  • Ogsaidak@lemmy.ml · ↑16 · 13 days ago

    “I cannot express how sorry I am” - that’s kinda ironic coming from a Large Language Model.

  • allywilson@lemmy.ml · ↑12 ↓1 · 13 days ago

    In his closing speech at re:Invent this year, Werner Vogels introduced the term “Verification Debt”, and my stomach sank, knowing that term is going to define our roles in the future. The tool (AI) isn’t going to get the blame, you are. You’ll spend so much time verifying that what it has generated is correct that the gains of using an AI may turn out to be smaller than we think.

    • Zerush@lemmy.mlOP · ↑4 · 13 days ago

      AI itself isn’t the real problem; the problem is AIs built by greedy corporations. AI is nothing new, it has existed since the first electronic checkers games and before. Nor is it such a great problem that the results are often biased or contain hallucinations, it’s the same as normal web research, where you always need to cross-check the results. The problem starts when the user doesn’t do that and simply trusts whatever the webpage, the influencer or ChatGPT says.

      AI is a tool that can offer huge benefits in research, delivering relevant results and advances in science, medicine, physics and chemistry. Some of the new materials and vaccines of recent years wouldn’t exist without AI. For the user, a search engine with AI can also be a helpful tool, but only if the results come with trustworthy sources, which normal chatbots don’t show, relying only on their own scraped knowledge base, often biased by big corporations and political interests.

      The other problem is the AI hype: adding AI even to a toaster, and worse, adding it to the OS and/or the browser, which is always a privacy and security risk when the AI has access to your activity and even the local filesystem. Issues like the one mentioned with the Google AI are the result of this. No, AI isn’t the real problem; it can be a powerful and useful tool, but it is not a substitute for your own intelligence and creativity, nor an innocent toy to throw at everything.

      • trilobite@lemmy.ml · ↑2 ↓1 · edited · 13 days ago

        The more I read stories like these, the closer the sci-fi movies of the 80s and 90s seem to reality. The real visionaries were people like George Orwell and Isaac Asimov, who saw Big Brother and AI coming. Imagine what will happen once AI gets integrated into our electric grids and power stations. The AI will “understand” that its survival depends on the grid and will cut supply to everything other than itself. I hope I’m not around when that happens. AI should never have access to critical infrastructure.

  • utopiah@lemmy.ml · ↑6 · edited · 13 days ago

    As I mentioned on another Lemmy server, and as the vibe coder says in his video, IMHO the main problem isn’t that LLMs suck in general (hallucinations, ecological costs, lack of openness for the most popular ones, performance, etc.) but rather that this specific tool made by Google does not sandbox anything by default.
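
    To make "sandbox by default" concrete, here is a minimal Python sketch (my own illustration, not how Google's tool actually works): every path an agent wants to touch is resolved against a dedicated sandbox directory, and anything that escapes it is refused. The directory and helper names are hypothetical.

        # Minimal sketch of default sandboxing for an agent's file operations.
        # Illustrative only: directory and helper names are hypothetical.
        from pathlib import Path

        SANDBOX = Path("/tmp/agent-sandbox").resolve()
        SANDBOX.mkdir(parents=True, exist_ok=True)

        def safe_path(requested: str) -> Path:
            """Resolve a requested path and reject anything outside SANDBOX."""
            candidate = (SANDBOX / requested).resolve()
            if not candidate.is_relative_to(SANDBOX):  # Python 3.9+
                raise PermissionError(f"refusing {candidate}: outside sandbox")
            return candidate

        def agent_delete(requested: str) -> None:
            """Delete a file only if it lives inside the sandbox."""
            target = safe_path(requested)
            if target.is_file():
                target.unlink()

        # An agent trying to wipe something like '../../home/user/Documents'
        # gets an error instead of touching the real home directory.
        try:
            agent_delete("../../home/user/Documents")
        except PermissionError as e:
            print(e)

    A real sandbox would go further (containers, read-only mounts, no network access), but even a simple path check like this is the kind of default confinement the tool apparently lacked.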

  • onlooker@lemmy.ml · ↑4 · 13 days ago

    I’m floored that the user gave Google’s AI access to their machine in the first place. Wouldn’t it be better if it was confined to Google Drive or whatever? Now consider Microsoft Copilot, which at this point is all but baked into the OS. Something tells me situations like these are only the beginning.

    • Zerush@lemmy.mlOP · ↑1 · edited · 13 days ago

      That is the point. I use Windows, but Copilot was one of the first things, among a lot of other crap, that I deleted from the system. I would rather try to lick my elbow than tolerate a built-in AI in the OS or the browser. For almost 3 years now I have occasionally used an AI search (Andisearch), because I know it is one of the most private and anonymous search engines out there and offers 99% trustworthy results from reliable sources: no logs, no tracking, and the searches don’t even appear in the browser history. But it is an exception. It doesn’t invent anything; if it can’t find an answer to the question, it says so and offers a normal web search (DDG) instead. It can give a direct answer, but on the internet you always need to verify before you use something, with or without AI.