• @cygnus@lemmy.ca

    Well, it does make sense in that the period during which we have exactly human-level AGI would be pretty short, because an AGI would soon surpass human-level intelligence. That said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

    • @phdepressed@sh.itjust.works

      We’re never getting AGI from any current or planned LLM and ML frameworks.

      These LLMs and ML systems do exceed human performance, but only within narrow, limited domains.