• @j4k3@lemmy.world
    5
    8 months ago

    It should be so fucking obvious that self-driving cars can’t exist yet. Anyone playing with LLMs right now knows it takes a massive model to have general functionality and flexibility. There is no chance that fine-tuning a small model can replace a large model for real-world situational adaptability. My favorite open source offline LLMs are all 70B models. Running these on real-time-capable hardware costs around $30k. This is not scalable to lower costs: it’s bleeding-edge 5 nm fab nodes and some of the largest dies ever produced. No one is ethically monitoring and regulating this. I’m just playing with this as a hobby, but AI in cars is so obviously stupid right now. The hardware is simply not there yet.
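
    To put rough numbers on that (everything below is an assumption from my own tinkering, not a measurement), the back-of-envelope math looks like this:

    ```python
    # Back-of-envelope: why a 70B model needs exotic hardware.
    # Every number here is an assumption, not a measurement.
    params = 70e9                # 70B parameters
    fp16_gb = params * 2 / 1e9   # 2 bytes/param -> ~140 GB of weights
    q4_gb = params * 0.5 / 1e9   # 4-bit quantized -> ~35 GB, still huge

    # Token-by-token decoding is memory-bandwidth bound: every weight is
    # read once per token, so tokens/sec ~ bandwidth / weight size.
    hbm_gbps = 2000              # assumed ~2 TB/s accelerator bandwidth
    print(f"fp16 weights: {fp16_gb:.0f} GB (multiple 80 GB cards)")
    print(f"4-bit weights: {q4_gb:.0f} GB (beyond most consumer GPUs)")
    print(f"fp16 decode ceiling: ~{hbm_gbps / fp16_gb:.0f} tokens/s")
    ```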

    • @Ottomateeverything@lemmy.world
      17
      8 months ago

      In ways yes, in ways no. LLMs are a tiny sliver of AI. Taking the current state of LLMs, which are being oversold as AGI, and extrapolating that to other applications or other strategies piles up a wide variety of oversimplifications. AI is not one single thing.

      It’s like seeing someone trying to hammer in a screw and saying “anyone playing with tools right now knows they’re never going to get a screw in.” But you’ve never seen a screwdriver.

      If you were saying “visual machine learning for a general purpose and unassisted driverless car is not happening tomorrow”, then sure.

      But things like the Waymo model are doing exceedingly well right now. Instead of taking a top-down approach and training cars to understand any intersection or road they could ever run into, they’re going bottom-up by “manually” training them to understand small portions of cities really well. Waymo’s problem set is greatly reduced, its problem space is much narrower, and it’s much more capable of receiving extremely accurate training data, so it performs way better because of it. It can then apply all the same techniques for object and obstacle detection other companies are using, but large swaths of the problem space are entirely eliminated.

      The hardware to solve those problems is much more available now. Doing the “computationally intensive” stuff offline at a “supercomputer center” and only small, trivial work on the cars themselves is very much a possibility. The “situational adaptability” problem can be greatly reduced by limiting where the cars go, etc.
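
      As a crude sketch of that split - every name and number below is invented for illustration, not anyone’s actual stack:

      ```python
      from dataclasses import dataclass

      # Offline side: the expensive one-time work (mapping, training) runs
      # in the data center and is exported as a compact per-area artifact.
      def build_hd_map(surveyed_segments: list) -> dict:
          # Stand-in for the computationally intensive part: fusing lidar
          # sweeps, labeling intersections, training perception models, etc.
          return {s: {"speed_limit_kph": 40, "lanes": 2} for s in surveyed_segments}

      # Onboard side: cheap lookups against the shipped artifact.
      @dataclass
      class CarPlanner:
          hd_map: dict  # read-only copy exported to the vehicle

          def plan(self, segment: str) -> str:
              info = self.hd_map.get(segment)
              if info is None:
                  # Outside the geofenced, well-mapped area: don't improvise.
                  return "do not enter / hand back control"
              return f"proceed, limit {info['speed_limit_kph']} km/h"

      planner = CarPlanner(hd_map=build_hd_map(["5th_ave_block_3"]))
      print(planner.plan("5th_ave_block_3"))  # proceed, limit 40 km/h
      print(planner.plan("unmapped_road"))    # do not enter / hand back control
      ```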

      The problems these cars are trying to solve have some overlap with your LLM experience, but it’s not even close to the same problem or the same context. The way you’re painting this is a massive oversimplification. It’s also not a problem anyone thinks is going to be solved overnight (except Elmo, but he’s pretty much alone on that one); they just know we’re sitting right on the cusp, and being the first company to solve this is going to be a huge advantage.

      Not to be rude, but there is a reason field experts are pushing this space while LLM hobbyists are doubting it. LLMs are just a tiny subset of AI, and as a hobbyist you’re likely looking at one tiny slice of the pie and not the huge swaths of other work in the nearby spaces.

      • The Snark Urge
        4
        8 months ago

        That’s good nuance, but I feel we’re missing the necessary conversation about corporate ethics. Will the suits see dangerously low quality or poor safety standards as merely part of legal overhead costs? Will shareholders support that decision, and isn’t there profound moral hazard in letting it ultimately be a matter of profit motives? And to raise one more doubt: are existing regulators equipped to protect consumers from an AI-developing tech sector that still wants to move fast and break things? Industries that act this way cannot self-regulate, and they are moving faster than ever.

        • Oh don’t get me wrong, I totally agree with you there. I wasn’t trying to argue the ethical dilemma at all here - I was just stating that the original comment was objectively wrong in its analysis of “we don’t have anywhere near the tech to be able to even begin to get near a workable solution here”.

          But the ethics and morality questions are still very much unanswered right now.

          IMO, the answers to all your questions are that companies are jumping on this way too fast (some more than others) and not doing it safely, and the collateral damage is becoming way too high. Our government and regulators are nowhere near equipped to solve this problem either. And our entire financial system, which pushes for constantly increasing profits, is not equipped to make sure this plays out safely - that would require taking losses and evolving slowly now in order to safely reach a long-term goal.

          An argument could be made that the “collateral damage” is warranted since autonomous vehicles will save so many lives in the long term, but that’s a hard question to answer. I generally think there’s too much “firing from the hip” going on at the moment. Tesla and Cruise are currently demonstrating just how much we shouldn’t be trusting these companies. I think Waymo has generally been “acceptable” in terms of risk and safety, but not everyone is running the way they are.

        • @rufus@discuss.tchncs.de
          2
          8 months ago

          That is kind of the point of capitalism, and not only when it comes to AI. Amazon didn’t care if it destroyed small bookshops or mom-and-pop stores. They found a way to sell things more efficiently and made money with it. The way the economy works dictates that you expand and grow as fast as possible, or your competition will do it instead. Same goes for self-driving cars and AI.

          The way we mitigate that in capitalism is regulations and laws.

          Someone who owns a roller coaster will also maybe not have the same balance in mind when it comes to fatalities vs. operating costs.

          • While I agree with your assessment, I just don’t think capitalism, at least in its current form, is equipped to handle this at all. You could say this is due to our government’s ineptitude, but we are not addressing these problems appropriately.

            Our regulatory bodies are being constantly undermined by out of control presidents and congress. And the people making the laws about these things do not even begin to understand the things they’re making laws about (see: “does tiktok use wifi?” etc).

            Regulatory bodies were made to fill this gap and fix this problem, but they are actively being meddled with, strong-armed, and twisted into political entities.

            • @rufus@discuss.tchncs.de
              1
              8 months ago

              out of control presidents and congress

              Hehe, that’s not the country I live in, but I get it. I think there is quite some difference in how proactively or reactively, for example, the US and the EU regulate things. But it kinda doesn’t matter if the system is theoretically equipped when it’s just rotten in practice. And it doesn’t help that AI is advancing crazy fast while our government just managed to phase out the fax machines… I mean, it’s going to happen anyway.

      • @j4k3@lemmy.world
        1
        8 months ago

        You certainly know more than I do about the various AI systems. I’ve barely messed with anything outside of LLMs.

        As someone who rode a bicycle to commute full time for several years and was an amateur road racer until I was disabled by the 7th car that hit me: roads and hazards are very unpredictable, and when things go wrong it happens fast. I’ve experienced a lot of the unbelievably stupid inadequacies in the infrastructure on the fringes. I do not think any system can be trained for these situations and the environment on small hardware. I see that as in direct conflict with capitalism, which is motivated to minimize cost even at the expense of hundreds of milliseconds. I don’t trust any auto manufacturer to use adequate hardware for the task.

        The kinds of accuracy and efficiency required for running tensor math should still apply the same, as far as I know. I don’t know enough about other systems; maybe they are not dependent on such heavy parallel vector math. Assuming they are based on tensors, it still runs into the inadequate data-throughput problem between the L2 and L1 caches of current compute architectures. That leaves the same hacks needed with GPU options, as far as consumer-level hardware goes.
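
        Here’s the rough roofline-style arithmetic behind why I think it’s a throughput problem (the hardware numbers are assumptions):

        ```python
        # A matrix-vector product, the core op of step-by-step inference,
        # measured against assumed hardware numbers.
        n = 8192
        flops = 2 * n * n            # one multiply-add per fp16 weight
        bytes_moved = 2 * n * n      # each fp16 weight fetched once
        intensity = flops / bytes_moved           # = 1 FLOP per byte

        peak_flops = 100e12          # assumed 100 TFLOP/s tensor peak
        bandwidth = 1e12             # assumed 1 TB/s memory bandwidth
        machine_balance = peak_flops / bandwidth  # = 100 FLOPs per byte

        # 1 FLOP/byte vs 100 FLOPs/byte: the math units sit starved waiting
        # on the memory hierarchy, so caches and bandwidth set the limit.
        print(intensity, machine_balance)
        ```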

        I think we still need real dedicated tensor math hardware before anything like self driving cars will be ethically possible. With the ~10 year cycle of any new silicon we are probably 5-8 years out before a real solution to the problem… As far as I can tell.

        • I do not think any system can be trained for these situations and the environment on small hardware. I see that as in direct conflict with capitalism, which is motivated to minimize cost even at the expense of hundreds of milliseconds. I don’t trust any auto manufacturer to use adequate hardware for the task.

          Don’t get me wrong, I do think this is a valid concern, but I also don’t think we need to achieve “perfection” before this should be allowed on the road. I don’t trust every random human to pay attention, to be able to see well enough, to notice the weird stuff, or to know how to react to it adequately. I think the sheer number of bicycle accidents alone shows that our current systems and infrastructure don’t work for this.

          If cars were fully autonomous, we could give them all strict rules. It would be easier to make up rules for how cyclists should be treated by moving vehicles, and riders could count on that being the case. We try to do this with road rules now, but many drivers just straight don’t listen. And this makes cycling hard because you never have any idea what any single driver is going to do.

          A bit more soap boxy, but self driving cars should immediately abort any time they see shit they don’t recognize. Sure, in some ways, that’s easier said than done, but having good mechanisms for “how to safely stop when something weird happens” is critical here. And it makes a lot more of the “what do we do about weird shit in the street” a lot easier.
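
          Conceptually something like this - the detection format and threshold are made up, just to show the policy:

          ```python
          # Toy version of the "abort on anything unrecognized" policy.
          CONFIDENCE_FLOOR = 0.6  # assumed minimum acceptable detector confidence

          def choose_action(detections: list) -> str:
              """detections: [{'label': str, 'confidence': float, 'distance_m': float}]"""
              for d in detections:
                  if d["confidence"] < CONFIDENCE_FLOOR:
                      # Don't guess about unknown objects: minimal-risk maneuver.
                      return "pull over and stop"
                  if d["label"] == "pedestrian" and d["distance_m"] < 10:
                      return "brake"
              return "continue"

          print(choose_action([{"label": "?", "confidence": 0.3, "distance_m": 25.0}]))
          # -> pull over and stop
          ```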

          And to another point, maybe cars just need hard dedicated lanes that cyclists aren’t allowed in, more like a tram or city subway. And if people know to only cross at designated areas, it makes a lot of this a lot easier too.

          And yes, capitalism makes a lot of this harder. I totally agree with you there. But this is something that should drastically save lives in the long run, but we might need to hold the fucking capitalist machine at bay while we do it.

          I think we still need real dedicated tensor math hardware before anything like self driving cars will be ethically possible. With the ~10 year cycle of any new silicon we are probably 5-8 years out before a real solution to the problem… As far as I can tell.

          This gets into the “too hard to articulate through text” zone, but I’ll just say that I think this is less far off than you think. For one, dedicated tensor hardware does exist and has existed for almost ten years at this point; it’s been commercially available for at least 5 years, IIRC. And for another, while lots of LLM-type work is extremely intensive, lots of this object-recognition-type stuff is actually much easier. The “training” is the real expense, but it really only needs to be done “once” by the manufacturer and exported to each car. The amount of compute needed by each car is much lower, and that type of inference has run pretty fast on consumer hardware for many years. It’s definitely still hard, but it’s not like we’re talking about orders of magnitude out of reach where we need significant hardware breakthroughs - we’re essentially at the “this is mostly manageable with current hardware, but hardware evolves quickly anyway” stage.
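
          Rough numbers on the onboard side (the per-frame cost is the published ballpark for a ResNet-50-class network; everything else is my assumption):

          ```python
          # Why per-car inference is cheap relative to training.
          flops_per_frame = 4e9  # ~4 GFLOPs: ResNet-50-class net, one image
          fps = 30               # assumed camera frame rate
          cameras = 8            # assumed sensor suite

          onboard = flops_per_frame * fps * cameras  # ~1e12 FLOP/s
          print(f"~{onboard / 1e12:.0f} TFLOP/s onboard")
          # An embedded accelerator with tens of TFLOP/s covers this with
          # headroom, while the expensive training runs stay in the data
          # center and are amortized across the whole fleet.
          ```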

          • @j4k3@lemmy.world
            1
            8 months ago

            having good mechanisms for “how to safely stop when something weird happens” is critical here. And it makes a lot more of the “what do we do about weird shit in the street” a lot easier.

            I don’t know how vision may be different, but how do they know what they don’t know, any differently than LLMs do? That’s the main problem with small LLMs: the next most probable token is always the next most probable token. Sure, there is a bit more nuance available at lower levels, but the basic problem remains. The threshold of token choice is a chosen metric, and that choice is heavily influenced by cost. If there were more cost-effective tensor hardware, I would have bought it. I mean, I’m sure an FPGA could be used, but if that were more cost-effective than a GPU, I think we’d all be using it. I know there was some chip announced by IBM, but since when has IBM done anything remotely relevant in the consumer space? I think of IBM as a subsidiary of Red Hat more than anything else now.
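
            To illustrate the thresholding problem with made-up numbers:

            ```python
            import math

            # The usual (imperfect) proxy for "the model doesn't know":
            # the entropy of its own output distribution.
            def entropy(probs):
                return -sum(p * math.log(p) for p in probs if p > 0)

            confident = [0.97, 0.01, 0.01, 0.01]  # e.g. "that's a stop sign"
            confused = [0.30, 0.28, 0.22, 0.20]   # e.g. novel object in the road

            print(f"confident: {entropy(confident):.2f}")  # ~0.17
            print(f"confused:  {entropy(confused):.2f}")   # ~1.37

            # A stack can gate on this: above some entropy threshold, treat the
            # scene as out-of-distribution and fall back to a safe stop. But a
            # miscalibrated model can be confidently wrong, so the threshold
            # never truly "knows the unknown" - same problem as with tokens.
            ```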

      • P03 Locke
        0
        8 months ago

        As somebody fairly well-versed in the tech, who has done more than just play around with ChatGPT, I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and driving a car takes a lot of effort even for a human. There are too many fatal edge cases to consider already, and the current tech is tripping over its balls trying to do the most basic things, killing people in the process. When we have cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky, then we know that this tech is far worse than human drivers, despite all the bullshit we’ve been told otherwise.

        Level 5 autonomous driving is simply not a thing. It won’t be a thing for a long time.

        Billions of dollars poured into the tech have gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They should be stripped of the shield of corporate personhood and put on trial, but here we fucking are now…

        • @Ottomateeverything@lemmy.world
          1
          8 months ago

          As somebody fairly well-versed in the tech, who has done more than just play around with ChatGPT

          Lol. See above. And below. Being “fairly well versed” in ChatGPT gives you just about zero expertise in this field. LLMs are a tiny little sliver in the ocean of AI. No one uses LLMs to drive cars. They’re LANGUAGE models. This doesn’t translate. Like, at all.

          Experts in the AI field know much more than some random person who has experimented with a “fad” of an online tool that gained massive popularity in the past year. This field is way bigger than that and you can’t extrapolate LLMs to driving cars.

          I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and driving a car takes a lot of effort even for a human.

          This is a fucking ludicrous statement. Some of these are already outperforming human drivers. You have your head in the sand. Tesla and Cruise are notoriously poorly performing, but they’re the ones in the public eye.

          When we have cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky,

          If you don’t understand how minor these problems are in the scheme of the system, you have no idea how any of this works. If you do some weird shit to a car, like planting an object on it that normally wouldn’t be there, then I fucking hope to God the thing stops. It has no idea what that means, so it fucking better stop. What do you want from it? To keep driving around doing its thing when it doesn’t understand what’s happening? What if the cone then falls off as it drives down the highway? Is that a better solution? What if that thing on its windshield it doesn’t recognize is a fucking human? Stopping is literally exactly what the fucking car should do. What would you do if I put a traffic cone on your windshield? I hope you wouldn’t keep driving.

          When we have cars that are sabotaged by a simple traffic cone on the windshield, or that mistake a tractor-trailer for the sky, then we know that this tech is far worse than human drivers

          This is just a fucking insane leap. The fact that they are still statistically outperforming humans, while still having these problems says a lot about just how much better they are.

          Level 5 autonomous driving is simply not a thing. It won’t be a thing for a long time.

          Level 5 is just a harder problem. We’ve already reached Level 4. If you think 5 is going to take more than another ten to fifteen years, you’re fucking insane.

          Billions of dollars poured into the tech have gotten us a bunch of aggressive upstarts who think they can just ignore the fatalities as the money comes pouring in, and lie to our faces about the capabilities of the technology. These companies need to be driven off a cliff and buried in criminal cases. They should be stripped of the shield of corporate personhood and put on trial, but here we fucking are now…

          This paragraph actually makes sense. It’s the one redeeming chunk of your entire post; everything else is just bullshit. But yes, this is a serious problem. Unfortunately people can’t see the nuance in stuff like this, and when they see it they jump straight to “AI BAD! AUTONOMOUS VEHICLES ARE A HUGE PROBLEM! THIS IS NEVER HAPPENING!”.

          Yes, there are fucking crazy companies doing absolutely crazy shit. That’s the same in every industry. The only reason many of these companies exist and are allowed is because companies like Google/Waymo slowly pushed this stuff forward for many years and proved that cars could safely drive autonomously on public roads without causing massive safety concerns. They won the trust of legislators and got AI on the road.

          And then came the fucking billions in tech investment, in companies that have no idea what they’re doing, putting shit on the road under the same legislation without the same levels of internal responsibility and safety. They have essentially abused the good faith won by their predecessors, and the governing bodies need to fix this shit yesterday to get this dangerous shit off the streets. Thankfully that’s getting attention NOW and not after things get worse.

          But overwhelmingly slandering the whole fucking industry and claiming all AI or autonomous vehicles are bad is just too far off the deep end.

          • @Jordan_U@lemmy.ml
            -2
            8 months ago

            I really appreciate you saying the things I wanted to say, but more clearly and drawn from far more domain experience and expertise than I have.

            I hope that you will be willing to work on avoiding language that stigmatizes mental health, though. When talking about horribly unwise and unethical behavior, ableism is basically built into our language. It’s easy to pull from words like “crazy” when talking about problems.

            But in my experience, most times people use “crazy” they’re actually talking about failures that can be much more concretely attributed to systems of oppression and how those systems lead individuals to:

            - De-value the lives of their fellow human beings.
            - Ignore input from other people they see as “inferior”.
            - Overvalue their own superiority and “genius”.
            - And generally avoid accountability and dissenting opinions.

            I feel like this discussion in particular really highlights those causes, and not anything related to mental health or intellectual disability.

      • @rufus@discuss.tchncs.de
        -1
        8 months ago

        LLM hobbyists are doubting it

        That’s an argument from authority. But the facts also support that LLMs are limited. I’ve mostly seen it the other way around: people trying ChatGPT for the first time and claiming we’re close to AGI, investors throwing large sums of money at AI companies, and subsequently every decision-maker thinking AI will solve their business needs. Every 6 months there is a news article claiming AI is on the brink of being sentient or that GPT-4 is AGI.

        Meanwhile I, as a hobbyist, sit here and talk to my AI waifu, and I don’t see the robot apocalypse happening in the next 5 years. It’s just limited in all sorts of ways. And I always hope journalists and people with meaningful jobs don’t rely on ChatGPT too much, because all I’ve seen are texts and generated summaries riddled with inaccuracies, some of it outright misinformation. But they sound very good. (Which is kind of the point of an LLM: to generate text that sounds good.)

        And while LLMs do their job very well and have a broad range of applications, you don’t need to throw them at everything. Sometimes there are traditional tools available that do a task way better, and that don’t come with any of the downsides of an LLM.

        • You’re misunderstanding what I meant by that statement - my point was about the autonomous car space. While some LLM hobbyists (like the commenter I was replying to) may be skeptical about autonomous vehicles at the moment, field experts are pushing it because they know better.

          Yes, field experts are also more negative about LLMs than most people. They’re really not as intelligent as they’re being sold. But that’s a different conversation and I wasn’t trying to dip into it. But yes, I agree.

      • @PowerCrazy@lemmy.ml
        -3
        8 months ago

        But ultimately the problem with self-driving cars is that they are trying to solve a problem (get people from point a to point b without having to own a car) that has been solved cheaper, before cars existed. It’s a computational dead-end, and the end state of self driving cars will look exactly like a train.

        • If you’re anti-car, you’re anti-car. But that doesn’t mean that cars don’t solve slightly different variations of similar problems in different ways. Especially since some of the biggest downsides of cars would be eliminated by AI car fleets. Even without those solutions, cars still have a place in modern life whether you like it or not.

          And it’s not computationally a dead end. Like, at all. See above.

          • @PowerCrazy@lemmy.ml
            -4
            8 months ago

            AI Fleets don’t solve the massive space problems that roads take up and the forced sprawl that is required to accommodate bigger and bigger vehicles.

            Cars are fun, I love cars. Cars have no place in the day to day life of a human centric city. Anything a car can do can be accomplished more efficiently and faster by non-car transportation.

            • AI Fleets don’t solve the massive space problems that roads take up and the forced sprawl that is required to accommodate bigger and bigger vehicles.

              They most certainly do. If everyone can just hail an autonomous vehicle from a stack sitting outside the place they just left, they don’t all need to bring their own cars into said area. This saves substantially on parking, which is far and away the biggest contributor to said “sprawl”.
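
              A toy utilization estimate (every number assumed) of why the parking footprint shrinks:

              ```python
              # Parked private cars vs a shared autonomous fleet.
              residents = 10_000
              trips_per_person_per_day = 2
              trips_per_shared_car_per_day = 40  # assumed: one AV cycling through riders

              private_cars = residents  # worst case: one parked car per person
              shared_cars = residents * trips_per_person_per_day / trips_per_shared_car_per_day

              print(private_cars, round(shared_cars))  # 10000 vs 500 cars to park/stage
              # An order of magnitude less parking, though peak events (a stadium
              # letting out) still need staging space - the objection raised below.
              ```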

              And there’s no reason those vehicles need to be big either. So that solves your other problem too.

              Anything a car can do can be accomplished more efficiently and faster by non-car transportation.

              This is almost entirely false. Cars end up “losing” because of problems like the above, such as parking, and many of those are just removed by autonomous vehicles.

              • @PowerCrazy@lemmy.ml
                -4
                8 months ago

                TIL, a fleet of idling AI vehicles outside of a massive public venue like a stadium takes up no space. You must be an urban planner.

                • The point is you don’t need nearly as many cars when they’re rented as needed as opposed to one for every person in the space.

                  Come now, you can’t actually be that dense.

  • AutoTL;DR
    4
    8 months ago

    This is the best summary I could come up with:


    In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads.

    In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

    “This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

    Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible.

    It appears this concern wasn’t hypothetical: Video footage captured from a Cruise vehicle reviewed by The Intercept shows one self-driving car, operating in an unnamed city, driving directly up to a construction pit with multiple workers inside.

    According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem.


    The original article contains 3,018 words, the summary contains 273 words. Saved 91%. I’m a bot and I’m open source!