I wasn't trying to pick a fight - clearly my positions are pretty changeable, so it would be a weirdly tricksy fight in any case - and thank you for flagging the twaddle I was talking.
FWIW I was attempting to say that although the idea and history of AI can't be uninvented (like colonialism etc), I'd rather that (AI and other) noxious expressions of human culture be made extinct than (implied but not actually stated) have AI lead to the extinction of a living thing. And of course, you're right - the phrase "living thing" begs a lot of questions, although what constitutes life, sentience, cognition, and a theory of mind are all probably off-topic, and I regret introducing them.
It was a confused, ill-expressed, half-baked take on matters, but thanks for helping me think it through. Whimsy sometimes overtakes me.
I think that makes more sense! I had misread you as saying that a failure to adopt LLMs would be equivalent to "killing" AI and making it extinct. Now my understanding is that you brought up extinction preemptively while thinking about the future of AI post-AGI, and I hadn't realized it was an extrapolation of what *could* exist and be called AI, rather than commentary on current AI.
For anyone who is not aware: the headline AI tools we have seen in the past few years (ChatGPT and its cousins) are large language models, or LLMs. I am NOT an expert and would ask anyone to please correct any misunderstandings I have as well! My understanding is that LLMs are essentially extremely advanced pattern-matchers: by taking in massive quantities of text, they learn to predict what is likely to come next in a particular string of words. If you ask one a question, it does not actually "know" anything; it has no understanding of context or language, and no sentience. Instead, the program looks at the words you used and, drawing on the statistical patterns it absorbed from millions and millions of articles and posts and questions and answers and every other piece of text it was trained on, spits out the response that is most in line with what other, actually-sentient humans have previously written.
If you ask it why the sky is blue, it does not for any moment experience the sky. It is a glorified calculator that has digested millions of Quora posts asking why the sky is blue and spits out an answer stitched together from all of those previous answers.
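If it helps to see that idea stripped down to its bones, here is a toy sketch in Python (purely my own illustration -- a real LLM learns billions of parameters over huge amounts of text, it is not a little lookup table of word pairs) of what "predict the next word from what people have already written" means:

```python
from collections import Counter, defaultdict

# Tiny made-up "corpus" standing in for the mountains of human text an LLM learns from.
corpus = [
    "why is the sky blue",
    "the sky is blue because of scattering",
    "the sky is full of light",
]

# Count which word tends to follow which word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    if word not in next_word_counts:
        return None
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("sky"))  # prints "is", because "sky is" shows up most often in the corpus
```

A real model does something vastly more sophisticated than counting word pairs, but the punchline is the same: the output is whatever best matches the patterns in the text it has already seen.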
One of the reasons AI (current AI, or LLMs) is dangerous is that if the data it is trained on is incorrect, it will spit out incorrect responses. You need reliable, good, accurate data in order to get good results, and even then, responses can be missing tons of context. If the data is skewed for any reason, it can give you incorrect information. So if a bunch of people went on Twitter with bots and started posting false information, an AI trained on scraped Twitter data may eventually spit out that same misinformation. The AI does not know the difference between truth and fiction; it only knows which words most often go together, and if those words are "vaccines cause autism", the AI will spit them out.
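To make that concrete with the same kind of toy sketch as above (again, purely my own illustration, not how real scraping or training actually works), notice how easily "the most likely continuation" gets captured by whoever posts the most:

```python
from collections import Counter

# Toy illustration: if the text a model learns from is dominated by a false claim,
# the "most likely continuation" simply becomes that false claim.
scraped_posts = [
    "vaccines cause autism",            # bot-amplified misinformation, repeated
    "vaccines cause autism",
    "vaccines cause autism",
    "vaccines cause no serious harm",   # accurate, but outnumbered
]

continuations = Counter(
    post.split("vaccines cause ", 1)[1] for post in scraped_posts
)
print(continuations.most_common(1))  # [('autism', 3)] -- the lie wins on frequency alone
```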
We are still some ways off from AGI, or artificial general intelligence. That would be an AI that actually comprehends, learns, understands, and generates genuinely new responses, rather than the tools we have now, which only recombine patterns from their training data in order to regurgitate responses--maybe the words have not been used in that exact way before, but it is like paraphrasing versus actually saying a new sentence. AGI will be a whole different ethical can of worms and I hope I get to see that world someday so I can hang out with robot souls.

The ethical concerns around LLM AIs have nothing to do with sapience and life; instead, they have everything to do with putting information control in the hands of very few people without ANY transparency and with a motive for generating profit above all else, and since the industry is new and without regulation, they can do basically whatever they want to generate that profit. If a bad actor--say, someone who has a vested interest in hurting national defense--wanted to, they could buy the personal data of millions of people, then use AI to find the people who use specific words and phrases and are most susceptible to blatantly false information. If your country relies on an educated and informed populace to enact legislative change, then a tool like this, which can be used for massive misinformation campaigns, is deeply concerning.
None of this addresses that this is a form of automation that will impact white-collar, clerical laborers. The labor market over the past few centuries has changed drastically as automation has upped our productive capabilities by, like, a lot. Historically, it was blue-collar positions that got automated--factory workers and farmers can produce SO much more on SO much less total labor now. When America was founded, the large majority of working adults were farmers, since you needed that many people to produce food. Our ability to grow food efficiently has increased SO much in the past several centuries that now, one farm in America--with all of its automated tools and tractors and sprinklers and combines and harvesters and and and--feeds roughly 166 people worldwide, and our population (and global populations) need far fewer farmers to produce the same amount of food.

We can see similar examples all across "blue-collar" labor, since that has, historically, been the easiest to automate. Because that form of labor has been automated so much (and offshored, but I am specifically talking about productive capacity increases thanks to automation), we have a much smaller share of the population doing that work in most developed countries, and the labor force has at least partly moved to the clerical class. Those "white-collar" jobs are absolutely still just a form of labor, but it is much harder to automate something like providing legal advice than it is to automate production of, say, vehicles, and so the laborers who have long been able to work without much fear of automation disrupting their positions are now panicking because a lot of that labor suddenly CAN be automated.
There are like a trillion other big, massive, sweeping things to think about with AI stuff. I am not involved in any of it. I am a regular human being going through my life as every other human is with billions of external things driving my every decision, and all I can do about it is exist. Things will change as they always have. Things will collapse and be created and expand and shrink and change as they always have. AI--or, the new tool that we have mislabeled as actual intelligence--will cause things to change in new, unexpected ways, which is what happens every time humans invent a new tool. I'm gonna take a hit now.