Forum » Science and Technology » Why electric cars are the right thing
Why electric cars are the right thing
Okapi ::
When Musk and Tesla agreed to the settlement, they essentially admitted that the funding was not "secured". If they had any evidence at all, they would not have accepted a $40 million fine and a three-year ban on Musk chairing the board.
Zheegec ::
Has anyone actually proven that the funding was not secured?
Tesla and Elon, with the statement that they are not going private? And, as a consequence, by paying the fine?
"God's commandment says: <Honor thy father and thy mother>,
but it says nothing about respecting the judiciary."
Janez Janša, 29.04.2014
Utk ::
They didn't go private because it couldn't be done the way Musk had imagined, with who knows how many owners remaining. The funding wasn't the only problem; the question is how much they even looked into it.
Zheegec ::
I'd say they looked into the funding very little ![:))](https://static.slo-tech.com/smeski/icon_lol.gif)
"God's commandment says: <Honor thy father and thy mother>,
but it says nothing about respecting the judiciary."
Janez Janša, 29.04.2014
Nikec3 ::
Utk said:
They didn't go private because it couldn't be done the way Musk had imagined, with who knows how many owners remaining. The funding wasn't the only problem; the question is how much they even looked into it.
Oh, come off it. The report makes it clear that Musk tweeted "funding secured" in the heat of the moment.
Other news:
https://avto.finance.si/8940000/Konec-k...
The world's largest car market has slipped into a minor crisis. Car sales have been falling for the last three months, and September's 11.6-percent drop (sales reached 2.39 million vehicles) was the largest since January 2012, when a 26.4-percent slump was caused by a new holiday date in China.
Despite the cooling market, sales of electric vehicles and plug-in hybrids keep growing strongly: up 54.8 percent in September, and up 81 percent in the first nine months, to 721 thousand vehicles.
@WarpedOne on Elon Musk:
"The ST intelligentsia accuses the serial producer of 'miracles' of lacking intelligence"
J.McLane ::
Of course; more and more Chinese would rather wait for their EV than buy something the state will sooner or later tax absurdly or even take away outright.
Simplicity is the ultimate sophistication - Leonardo da Vinci
Nikec3 ::
While I'm at it, I'd also like to note that the market share of EVs is growing faster than @Okapi predicted two years ago. ![;)](https://static.slo-tech.com/smeski/icon_wink.gif)
@WarpedOne on Elon Musk:
"The ST intelligentsia accuses the serial producer of 'miracles' of lacking intelligence"
Utk ::
Nikec3 said:
Oh, come off it. The report makes it clear that Musk tweeted "funding secured" in the heat of the moment.
Even if it was in the heat of the moment, that doesn't mean there was nothing to it.
celada ::
Utk said:
Even if it was in the heat of the moment, that doesn't mean there was nothing to it.
If it had been real, there would have been no $40M fine.
Utk ::
Nikec3 said:
While I'm at it, I'd also like to note that the market share of EVs is growing faster than @Okapi predicted two years ago.
No point discussing where the limit of this growth lies until there is enough supply. If we had today's electric cars, no better than they are, but offered by every manufacturer, the share would be very high overnight, at least 25%.
celada said:
If it had been real, there would have been no $40M fine.
If it were that simple, you'd be Musk.
jernejl ::
Utk said:
No point discussing where the limit of this growth lies until there is enough supply. If we had today's electric cars, no better than they are, but offered by every manufacturer, the share would be very high overnight, at least 25%.
Right, if they made more of them, there would be more of them.
The share of electric cars would also be higher if fewer ICE cars were made.
But that is hardly a revolutionary discovery.
Utk ::
Of course it is, for those who think the problem is demand. Or that you can't charge them on the 10th floor, that the batteries die after 5 years, and who knows what else. None of that is an obstacle to enormous growth for quite a few years yet.
jernejl ::
Utk said:
Of course it is, for those who think the problem is demand.
If demand were higher than it is, more would be sold (and made) than are currently sold (and made)... at higher prices. And vice versa.
If, for whatever (irrational) reason, manufacturers refused to set the market price and sold below it, so that supply could not meet demand, the used-EV market would regulate that. And there is no sign that used EVs are, in general, selling at prices above those of new EVs.
WarpedGone ::
NN Changes in V9 (2018.39.7)
Have not had much time to look at V9 yet, but I thought I'd share some interesting preliminary analysis. Please note that network size estimates here are spreadsheet calculations derived from a large number of raw kernel specifications. I think they're about right and I've checked them over quite carefully but it's a lot of math and there might be some errors.
First, some observations:
Like V8 the V9 NN (neural net) system seems to consist of a set of what I call 'camera networks' which process camera output directly and a separate set of what I call 'post processing' networks that take output from the camera networks and turn it into higher level actionable abstractions. So far I've only looked at the camera networks for V9 but it's already apparent that V9 is a pretty big change from V8.
---------------
One unified camera network handles all 8 cameras
Same weight file being used for all cameras (this has pretty interesting implications and previously V8 main/narrow seems to have had separate weights for each camera)
Processed resolution of 3 front cameras and back camera: 1280x960 (full camera resolution)
Processed resolution of pillar and repeater cameras: 640x480 (1/2x1/2 of camera's true resolution)
all cameras: 3 color channels, 2 frames (2 frames also has very interesting implications)
(was 640x416, 2 color channels, 1 frame, only main and narrow in V8)
------------
Various V8 versions included networks for pillar and repeater cameras in the binaries but AFAIK nobody outside Tesla ever saw those networks in operation. Normal AP use on V8 seemed to only include the use of main and narrow for driving and the wide angle forward camera for rain sensing. In V9 it's very clear that all cameras are being put to use for all the AP2 cars.
The basic camera NN (neural network) arrangement is an Inception V1 type CNN with L1/L2/L3ab/L4abcdefg layer arrangement (architecturally similar to V8 main/narrow camera up to end of inception blocks but much larger)
about 5x as many weights as comparable portion of V8 net
about 18x as much processing per camera (front/back)
The V9 network takes 1280x960 images with 3 color channels and 2 frames per camera from, for example, the main camera. That's 1280x960x3x2 as an input, or 7.3M. The V8 main camera was 640x416x2 or 0.5M - 13x less data.
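The input-size arithmetic is easy to check; a short Python sketch of my own (counting raw elements per inference, not bytes):

```python
# Per-camera input sizes quoted above, counted as raw elements.
v9_input = 1280 * 960 * 3 * 2   # width * height * color channels * frames
v8_input = 640 * 416 * 2        # width * height * channels, single frame

print(v9_input)                       # 7372800 -> the "7.3M" figure
print(v8_input)                       # 532480  -> the "0.5M" figure
print(round(v9_input / v8_input, 1))  # 13.8    -> "13x less data" for V8
```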
For perspective, V9 camera network is 10x larger and requires 200x more computation when compared to Google's Inception V1 network from which V9 gets its underlying architectural concept. That's processing *per camera* for the 4 front and back cameras. Side cameras are 1/4 the processing due to being 1/4 as many total pixels. With all 8 cameras being processed in this fashion it's likely that V9 is straining the compute capability of the APE. The V8 network, by comparison, probably had lots of margin.
network outputs:
V360 object decoder (multi level, processed only)
back lane decoder (back camera plus final processed)
side lane decoder (pillar/repeater cameras plus final processed)
path prediction pp decoder (main/narrow/fisheye cameras plus final processed)
"super lane" decoder (main/narrow/fisheye cameras plus final processed)
Previous V8 aknet included a lot of processing after the inception blocks - about half of the camera network processing was taken up by non-inception weights. V9 only includes inception components in the camera network and instead passes the inception processed outputs, raw camera frames, and lots of intermediate results to the post processing subsystem. I have not yet examined the post processing subsystem.
And now for some speculation:
Input changes:
The V9 network takes 1280x960 images with 3 color channels and 2 frames per camera from, for example, the main camera. That's 1280x960x3x2 as an input, or 7.3MB. The V8 main camera processing frame was 640x416x2 or 0.5MB - 13x less data. The extra resolution means that V9 has access to smaller and more subtle detail from the camera, but the more interesting aspect of the change to the camera interface is that camera frames are being processed in pairs. The two frames in each pair are likely time-offset by some small delay - 10ms to 100ms I'd guess - allowing each processed camera input to see motion. Motion can give you depth, separate objects from the background, help identify objects, predict object trajectories, and provide information about the vehicle's own motion. It's a pretty fundamental improvement to the basic perceptions of the system.
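A minimal numpy sketch of the frame-pairing idea (the channel-wise stacking and the exact delay are my assumptions, not something extracted from the binaries):

```python
import numpy as np

H, W = 960, 1280  # full-resolution front/back camera

# Hypothetical: two RGB frames captured a small dt apart are stacked
# along the channel axis, giving the network 6 channels of input and
# an implicit view of motion between the two instants.
frame_t0 = np.zeros((H, W, 3), dtype=np.uint8)  # frame at time t
frame_t1 = np.zeros((H, W, 3), dtype=np.uint8)  # frame at time t + dt

pair = np.concatenate([frame_t0, frame_t1], axis=-1)
print(pair.shape)  # (960, 1280, 6)
print(pair.size)   # 7372800 elements - the 7.3M input figure
```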
Camera agnostic:
The V8 main/narrow network used the same architecture for both cameras, but by my calculation it was probably using different weights for each camera (probably 26M each for a total of about 52M). This makes sense because main/narrow have a very different FOV, which means the precise shape of objects they see varies quite a bit - especially towards the edges of frames. Training each camera separately is going to dramatically simplify the problem of recognizing objects since the variation goes down a lot. That means it's easier to get decent performance with a smaller network and less training. But it also means you have to build separate training data sets, evaluate them separately, and load two different networks alternately during operation. It also means that your network can learn some bad habits because it always sees the world in the same way.
Building a camera agnostic network relaxes these problems and simultaneously makes the network more robust when used on any individual camera. Being camera agnostic means the network has to have a better sense of what an object looks like under all kinds of camera distortions. That's a great thing, but it's very, *very* expensive to achieve because it requires a lot of training, a lot of training data and, probably, a really big network. Nobody builds them so it's hard to say for sure, but these are probably safe assumptions.
Well, the V9 network appears to be camera agnostic. It can process the output from any camera on the car using the same weight file.
It also has the fringe benefit of improved computational efficiency. Since you just have the one set of weights you don't have to constantly be swapping weight sets in and out of your GPU memory and, even more importantly, you can batch up blocks of images from all the cameras together and run them through the NN as a set. This can give you a multiple of performance from the same hardware.
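The batching benefit can be sketched like this (shapes are my guesses from the resolutions quoted earlier; real code would feed these tensors to one forward pass on the GPU):

```python
import numpy as np

# With a single shared weight set, frame pairs from several cameras can
# be stacked into one batch and pushed through the network together,
# instead of swapping per-camera weights in and out of GPU memory.
front_back = [np.zeros((6, 960, 1280), dtype=np.float32) for _ in range(4)]
# Pillar/repeater cameras run at half resolution, so they form their own
# batch (all tensors in a batch must share a shape).
side = [np.zeros((6, 480, 640), dtype=np.float32) for _ in range(4)]

batch_fb = np.stack(front_back)   # (4, 6, 960, 1280) -> one kernel launch
batch_side = np.stack(side)       # (4, 6, 480, 640)  -> a second launch
print(batch_fb.shape, batch_side.shape)
```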
I didn't expect to see a camera agnostic network for a long time. It's kind of shocking.
Considering network size:
This V9 network is a monster, and that's not the half of it. When you increase the number of parameters (weights) in an NN by a factor of 5 you don't just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it's more akin to a number with 5 times as many digits. So if V8's expressive capacity was 10, V9's capacity is more like 100,000. It's a mind boggling expansion of raw capacity. And likewise the amount of training data doesn't go up by a mere 5x. It probably takes at least thousands and perhaps millions of times more data to fully utilize a network that has 5x as many parameters.
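As rough arithmetic, the analogy says capacity scales like digit count, i.e. exponentially rather than linearly in parameter count (this is the analogy from the paragraph above, not a formal result):

```python
# "A number with 5 times as many digits": multiplying the parameter
# count multiplies the exponent of the capacity, not the capacity itself.
v8_capacity = 10
param_ratio = 5
v9_capacity = v8_capacity ** param_ratio  # exponent scales with params
print(v9_capacity)  # 100000 -> "more like 100,000"
```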
This network is far larger than any vision NN I've seen publicly disclosed and I'm just reeling at the thought of how much data it must take to train it. I sat on this estimate for a long time because I thought that I must have made a mistake. But going over it again and again I find that it's not my calculations that were off, it's my expectations that were off.
Is Tesla using semi-supervised training for V9? They've gotta be using more than just labeled data - there aren't enough humans to label this much data. I think all those simulation designers they hired must have built a machine that generates labeled data for them, but even so.
And where are they getting the datacenter to train this thing? Did Larry give Elon a warehouse full of TPUs?
I mean, seriously...
I look at this thing and I think - oh yeah, HW3. We're gonna need that. Soon, I think.
Omnidirectionality (V360 object decoder):
With these new changes the NN should be able to identify every object in every direction at distances up to hundreds of meters and also provide approximate instantaneous relative movement for all of those objects. If you consider the FOV overlap of the cameras, virtually all objects will be seen by at least two cameras. That provides the opportunity for downstream processing to use multiple perspectives on an object to more precisely localize and identify it.
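As a toy illustration of why overlapping FOVs help localization (plain 2D triangulation; purely my sketch, not anything recovered from the V9 networks):

```python
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Intersect two 2D rays p + t*d; returns the crossing point."""
    # Solve t1*d1 - t2*d2 == p2 - p1 for (t1, t2).
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * d1

# Two cameras 2 m apart each report a bearing to the same object;
# intersecting the two bearings localizes it.
cam_a, cam_b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
obj = np.array([1.0, 5.0])                      # ground truth
d_a = (obj - cam_a) / np.linalg.norm(obj - cam_a)
d_b = (obj - cam_b) / np.linalg.norm(obj - cam_b)

print(intersect_rays(cam_a, d_a, cam_b, d_b))   # ~[1. 5.]
```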
General thoughts:
I've been driving V9 AP2 for a few days now and I find the dynamics to be much improved over recent V8. Lateral control is tighter and it's been able to beat all the V8 failure scenarios I've collected over the last 6 months. Longitudinal control is much smoother, traffic handling is much more comfortable. V9's ability to prospectively do a visual evaluation on a target lane prior to making a change makes the auto lane change feature a lot more versatile. I suspect detection errors are way down compared to V8 but I also see that a few new failure scenarios have popped up (offramp / onramp speed control seem to have some bugs). I'm excited to see how this looks in a couple of months after they've cleaned out the kinks that come with any big change.
Being an avid observer of progress in deep neural networks my primary motivation for looking at AP2 is that it's one of the few bleeding edge commercial applications that I can get my hands on and I use it as a barometer of how commercial (as opposed to research) applications are progressing. Researchers push the boundaries in search of new knowledge, but commercial applications explore the practical ramifications of new techniques. Given rapid progress in algorithms I had expected near future applications might hinge on the great leaps in efficiency that are coming from new techniques. But that's not what seems to be happening right now - probably because companies can do a lot just by scaling up NN techniques we already have.
In V9 we see Tesla pushing in this direction. Inception V1 is a four year old architecture that Tesla is scaling to a degree that I imagine Inception's creators could not have expected. Indeed, I would guess that four years ago most people in the field would not have expected that scaling would work this well. Scaling computational power, training data, and industrial resources plays to Tesla's strengths and involves less uncertainty than potentially more powerful but less mature techniques. At the same time Tesla is doubling down on their 'vision first / all neural networks' approach and, as far as I can tell, it seems to be going well.
As a neural network dork I couldn't be more pleased.
So long, and thanks for all the fish
Truga ::
WarpedGone said:
NN Changes in V9 (2018.39.7) [...] As a neural network dork I couldn't be more pleased.
same
RedDrake ::
WarpedGone said:
Basically much too long, even if interesting ...
So the guy is saying that the current HW obviously won't be enough to make full use of this "neural net".
Which is what we've been preaching for a long time.
But programmer details like that don't interest me; I deal with more down-to-earth things (how to predict the weather _very_ precisely 15 minutes ahead).
The only question I have is: "when can I take the bus to the source of the Soča, and have the car come pick me up there by itself (empty), at around 21:00-22:00?"
Are we there yet?
When will we be there?
Will it take 10, 20, 30 or more years?
That's what interests me, nothing else.
Okapi ::
If it happens within 20 years, that will be fast. ![;)](https://static.slo-tech.com/smeski/icon_wink.gif)
In 10 years there may be robotic taxis, remotely supervised from a control center.
PrimoZ_ ::
@RedDrake
It will be next year, when W1 gets his Model 3, so it can earn money while he isn't using the car himself :)
Utk ::
jernejl said:
If demand were higher than it is, more would be sold (and made) than are currently sold (and made)... at higher prices. And vice versa.
If, for whatever (irrational) reason, manufacturers refused to set the market price and sold below it, so that supply could not meet demand, the used-EV market would regulate that. And there is no sign that used EVs are, in general, selling at prices above those of new EVs.
That simply isn't true. Hardly anyone will pay more for a used electric car than for a new one, even if they wanted to. There is no such pressure. Second, you're ignoring a whole pile of things involved in buying a car: considerable brand loyalty, the neighbors' opinions, advertising, and so on. If there were more of these cars on the road, more would be sold. And more would be sold if there were more of them in showrooms. Life is not economic theory.
RedDrake ::
Oh really?
Did you know that abroad you can get a used, decently equipped Model S for ~30k EUR?
Sure, it has 500k km on it, but supposedly nothing can break in an electric car anyway, since electronics work forever.
How come that car has been for sale for months, if such a car is, clear as day, N times better than any naziwagen of "Passat" rank or above (which is new, worse equipped, has literally 4x less power, and is more expensive on top of it all)?
BigWhale ::
RedDrake said:
So the guy is saying that the current HW obviously won't be enough to make full use of this "neural net".
Which is what we've been preaching for a long time.
That's a pure lie. Supposedly the hardware of the first Teslas is perfectly sufficient for full self-driving. All they need is a software upgrade! ;>
RedDrake ::
And supposedly MobilEye was perfectly fine. Well, no, Nvidia plus in-house know-how is top!
Except now Nvidia isn't either. Before, though, it was top-shit.
Utk ::
RedDrake said:
Oh really?
Did you know that abroad you can get a used, decently equipped Model S for ~30k EUR?
Sure, it has 500k km on it, but supposedly nothing can break in an electric car anyway, since electronics work forever.
How come that car has been for sale for months, if such a car is, clear as day, N times better than any naziwagen of "Passat" rank or above (which is new, worse equipped, has literally 4x less power, and is more expensive on top of it all)?
And what does that have to do with horse racing?
![](https://static.slo-tech.com/stili/avatar_gray.gif)
celada ::
Come on, stop making things up. The report makes it clear that Musk tweeted "funding secured" in the heat of the moment.
Even if it was tweeted in the heat of the moment, that doesn't mean there was nothing to it.
If it had been real, there would have been no 40 million fine.
If it were that simple, you would be Musk.
So what, in your view the fine was pulled out of thin air? Musk failed to prove that his statement rested on solid ground, and that's why they hit him in the wallet.
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Utk ::
Musk would have gotten that fine even if he had proven a hundred times over that the funding was secured. The only way he wouldn't have gotten it is if he had actually gone private.
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Utk ::
All 24 pages are relevant if you want to understand the story. I can, however, quote you some headline from Slovenske novice; for some people everything is already written there.
![](https://static.slo-tech.com/stili/bel_non_grata.png)
WarpedGone ::
VW CEO says German carmakers have only 50% chance of staying ahead
Like every PR statement, this one too is embellished several times over and maximally optimistic.
So long, and thanks for all the fish
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Utk ::
The Germans aren't even capable of pulling off this transition. In the best case they're at least smart enough to hire foreigners for the job.
![](https://static.slo-tech.com/stili/avatar_gray.gif)
svit ::
Trouble in the Diesel world:
Opel will have to recall 100,000 vehicles over emissions-cheating electronics
Volvo has emissions problems too. They blame a sensor that fails faster than expected. They do know, though, that it only affects trucks in North America and Europe...
Audi has to pay an 800 million EUR fine over emissions-cheating electronics
Maybe it wouldn't be a bad idea to slowly start counting how many people have died from these (unknown) toxins, and to start putting the people responsible on trial for mass killings.
No idea
Edit history…
- edited by: svit ()
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Nikec3 ::
WarpedGone said:
VW CEO says German carmakers have only 50% chance of staying ahead
Like every PR statement, this one too is embellished several times over and maximally optimistic.
The Germans are running a PR campaign which, translated, means "we want the state to finance our transition to EVs".
@WarpedOne on Elon Musk:
"The ST intelligentsia accuses the serial producer of 'miracles' of a lack of intelligence"
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Okapi ::
Well, the self-driving problem is solved. Musk says that in half a year they'll have a new, revolutionary processor that will improve the autopilot's capabilities 5 to 20x. And right after that, I presume, comes coast-to-coast driving with no hands on the wheel.
Some people do have their doubts, though.
![;)](https://static.slo-tech.com/smeski/icon_wink.gif)
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Zheegec ::
Exactly what Tesla needs. A new chip to enable the autopilot functionality they've been selling to customers for years and years as a $3k–$5k option.
"God's commandment says: <Honour thy father and mother>,
but it says nothing about honouring the judiciary."
Janez Janša, 29.04.2014
Edit history…
- edited by: Zheegec ()
![](https://static.slo-tech.com/stili/avatar_gray.gif)
svit ::
You do know they'll replace the computer free of charge? And that they sold the FSD functionality with an asterisk that told customers up front it isn't enabled yet?
No idea
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Zheegec ::
Yep. First, this will be a cost for Tesla. Second, I very much doubt anything will come of it in "6 months". But the stock will surely go up, which is of course the point of all these news items and promises.
Bonus question: which "summer" did Elon have in mind in this statement of his?
But on Thursday, Elon Musk, chief executive of Tesla, took a big step in that direction when he announced that the maker of high-end electric cars would introduce autonomous technology by this summer. The technology would allow drivers to have their cars take control on what he called “major roads” like highways.
"God's commandment says: <Honour thy father and mother>,
but it says nothing about honouring the judiciary."
Janez Janša, 29.04.2014
Edit history…
- edited by: Zheegec ()
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Truga ::
Can you imagine some carmaker selling "preorders" for an air-conditioning install? "It'll be ready any moment now, you already have the hardware in."
Three years later: "It'll be ready any moment now, but you'll have to take the car in for service."
If Renault or Volvo pulled nonsense like that, people would lynch them; when Tesla does it, everything is fine :D
![](https://static.slo-tech.com/stili/avatar_gray.gif)
pegasus ::
Yes.
Because Tesla is building something new that doesn't exist yet, while Renault or Volvo would palm off on you some piece of junk from a Chinese warehouse abandoned for a decade.
![](https://static.slo-tech.com/stili/bel_non_grata.png)
Unknown_001 ::
Trouble in the Diesel world:
Opel will have to recall 100,000 vehicles over emissions-cheating electronics
Volvo has emissions problems too. They blame a sensor that fails faster than expected. They do know, though, that it only affects trucks in North America and Europe...
Audi has to pay an 800 million EUR fine over emissions-cheating electronics
Maybe it wouldn't be a bad idea to slowly start counting how many people have died from these (unknown) toxins, and to start putting the people responsible on trial for mass killings.
They all cheated.
And then you all laughed at me and called me a Nazi when I pointed out that VW wasn't the only one.
What do you call a moderator with half a brain?
Gifted
Edit history…
- edited by: Unknown_001 ()
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Nikec3 ::
They all cheated.
And then you all laughed at me and called me a Nazi when I pointed out that VW wasn't the only one.
Nazi.
@WarpedOne on Elon Musk:
"The ST intelligentsia accuses the serial producer of 'miracles' of a lack of intelligence"
![](https://static.slo-tech.com/stili/avatar_gray.gif)
Truga ::
![](https://static.slo-tech.com/stili/avatar_gray.gif)
svit ::
I don't see why you're getting worked up now. Before buying the FSD package, people were told clearly that this functionality wasn't available yet (but that it would be soon). And they also had the option of not buying it. Yet they did... more than 70% of all Model 3 buyers.
No idea
![](https://static.slo-tech.com/stili/avatar_gray.gif)
nekikr ::
What does it mean that the autopilot will be 5x to 20x better than what there is now? How did he measure that?
It's been written a hundred times already. On the highway, in a super-duper €100,000 car, the autopilot will have at least 5x fewer fatal crashes than the overall average (across all cars, 10 years old on average, driving around every possible backwater). Clap, clap.
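For what it's worth, the "Nx safer" headline being mocked here is nothing more than a ratio of two fatal-crash rates. A minimal sketch of that arithmetic, with made-up placeholder numbers (the real figures are not in this thread):

```python
def fatal_crash_rate(crashes: int, miles: float) -> float:
    """Fatal crashes per million miles driven."""
    return crashes / (miles / 1_000_000)

# Placeholder figures, NOT real statistics:
fleet_rate = fatal_crash_rate(crashes=350, miles=3_000_000_000)   # all cars, all roads
autopilot_rate = fatal_crash_rate(crashes=10, miles=500_000_000)  # highway-only miles

# The headline claim is just this ratio -- which is exactly the objection above:
# the two rates cover very different cars (new, expensive) and roads (highways only).
improvement = fleet_rate / autopilot_rate
print(f"{improvement:.1f}x fewer fatal crashes per mile")  # prints 5.8x ...
```

The denominators are the whole argument: divide a highway-only rate by an all-roads, all-ages fleet rate and the "improvement" is baked in before the autopilot does anything.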
![](https://static.slo-tech.com/stili/bel_non_grata.png)
Unknown_001 ::
What does it mean that the autopilot will be 5x to 20x better than what there is now? How did he measure that?
It's been written a hundred times already. On the highway, in a super-duper €100,000 car, the autopilot will have at least 5x fewer fatal crashes than the overall average (across all cars, 10 years old on average, driving around every possible backwater). Clap, clap.
A lame speculation. Some crashes you simply can't do anything about.
What do you call a moderator with half a brain?
Gifted
![](https://static.slo-tech.com/stili/avatar_gray.gif)