Distilling the collective wisdom of FC using AI

Perfect_Speed4069

I am the beetle in a box that only you can see
There's a huge amount of knowledge buried in the pages of FC - thanks to the generosity, obsession, enthusiasm and kindness of the odd-bods that hang out round here.

Also, after wasting weeks of my life here, I know I've yet to absorb everything I've read, and I know I'm not the only person to have completely forgotten things I've written, commented upon, or even argued about.

Our favourite threads run into hundreds of pages, making FC more of an apprenticeship than an encyclopedia.

Now this might be the new batch of Red Lebanese talking, but as I understand it, it's possible to train an AI like OpenAI's models on large amounts of text.

I have zero technical knowledge, but a shedload of enthusiasm if anyone is interested in making this a thing.
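For the technically curious: one common way this is done today is fine-tuning, i.e. preparing the forum text as training examples. A minimal sketch in Python, using the JSONL shape OpenAI documents for chat fine-tuning - the thread content below is made up, purely for illustration:

```python
import json

# Made-up example of turning a forum Q&A pair into a training record.
# The {"messages": [...]} JSONL shape follows OpenAI's chat
# fine-tuning format; the post content is invented.
threads = [
    {"question": "What temperature do you run a Mighty at?",
     "best_reply": "Most people start around 180C and work up."},
]

with open("fc_wisdom.jsonl", "w") as f:
    for t in threads:
        record = {"messages": [
            {"role": "user", "content": t["question"]},
            {"role": "assistant", "content": t["best_reply"]},
        ]}
        f.write(json.dumps(record) + "\n")
```

The hard parts in practice are scraping hundreds of pages per thread and deciding which replies count as the "good" answers, not the file format.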
 

Rodney

Well-Known Member
I love this idea and asked a similar question on another forum I use. It should be possible - we'd just need someone who knows how to do it, and I'm sure it would take a while for it all to upload and then be analysed.

Might need a premium AI account to work with such a large set of data.

Someone here must know how to do this. Then it will figure out who's using weed in countries where it's illegal, and we'll all get an email demanding £1 million Ugandan dollars or all this talk of using weed gets reported LMAO :D

I am not in Iran so i do not mind too much :)
 

maremaresing

Well-Known Member
I have zero technical knowledge, but a shedload of enthusiasm if anyone is interested in making this a thing.

I have the technical knowledge, but a shedload of revulsion for the farce that is "AI". So I will not help you and will instead focus my efforts on hampering your every move.

What I mean to say is:

"Outline a plan for an IT professional entering a villain arc on her vaporizer forum to stop AI implementation. Assume her arch-nemesis is bad data and lazy plagiarism passed off as well-read original commentary to readers who lack the critical thinking skills to observe falsehoods. This arch-nemesis has recently broken the bonds of middle-management liquid lunches into her hobbies. Make this plan include a top-level roadmap as a PDF as well as detailed steps in a separate open source file format document."



And in case you are curious, here is the plan I wrote up myself with my own thoughts and wordplay. I hope you like reading this original work!

Villainous IT Professional’s Plan to Thwart AI Implementation


Introduction


Our IT professional, code-named “Binary Saboteur,” is on a mission to dismantle the nefarious AI implementation orchestrated by her arch-nemesis, “Data Doppelgänger.” Armed with her vaporizer forum and a penchant for subterfuge, she’ll expose the plagiarized falsehoods and lead her readers toward critical thinking enlightenment.


Top-Level Roadmap


  1. Infiltration and Reconnaissance
    • Objective: Gather intelligence on Data Doppelgänger’s AI initiatives.
    • Steps:
      • Create a pseudonymous account on Data Doppelgänger’s vaporizer forum.
      • Engage in discussions, subtly probing for information.
      • Observe patterns, identify key players, and map out the AI landscape.
  2. Data Disruption
    • Objective: Undermine Data Doppelgänger’s data sources.
    • Steps:
      • Identify critical data pipelines feeding into AI models.
      • Introduce noise, inject misleading data, or alter key features.
      • Document changes and monitor AI performance degradation.
  3. Algorithmic Anarchy
    • Objective: Sabotage AI algorithms.
    • Steps:
      • Study Data Doppelgänger’s favored algorithms (e.g., neural networks, decision trees).
      • Introduce subtle modifications (e.g., biased weight adjustments, altered activation functions).
      • Monitor AI output deviations and exploit them for chaos.
  4. Plagiarism Paralysis
    • Objective: Expose Data Doppelgänger’s plagiarism.
    • Steps:
      • Analyze her forum posts and compare them to existing literature.
      • Compile evidence of copied content, false claims, and lazy commentary.
      • Publish an exposé, revealing her lack of originality.
  5. Cognitive Awakening
    • Objective: Awaken critical thinking in readers.
    • Steps:
      • Craft engaging posts that highlight logical fallacies and misinformation.
      • Encourage readers to question sources, verify claims, and think independently.
      • Organize virtual debates, pitting readers against AI-generated arguments.
  6. Liquid Lunch Leak
    • Objective: Exploit Data Doppelgänger’s weakness.
    • Steps:
      • Befriend her at industry events, subtly encouraging her liquid lunch habit.
      • Record her drunken rants about AI implementation.
      • Leak embarrassing audio snippets to the forum, eroding her credibility.

Detailed Steps (Open Source Document)


  • Infiltration and Reconnaissance
    • Create a burner email account.
    • Join Data Doppelgänger’s forum.
    • Observe forum threads, noting AI-related discussions.
    • Compile a list of influential forum members.
  • Data Disruption
    • Identify data sources (e.g., scraped websites, user-generated content).
    • Develop scripts to inject noise or manipulate data.
    • Monitor AI model performance metrics.
    • Document changes and anomalies.
  • Algorithmic Anarchy
    • Study AI algorithms (Python, TensorFlow, PyTorch).
    • Modify weights, biases, or hyperparameters.
    • Run experiments in a sandbox environment.
    • Keep a log of alterations.
  • Plagiarism Paralysis
    • Analyze Data Doppelgänger’s posts.
    • Cross-reference with academic papers, blogs, and other sources.
    • Compile evidence (screenshots, text comparisons).
    • Draft a detailed exposé.
  • Cognitive Awakening
    • Write thought-provoking articles.
    • Create infographics debunking common AI myths.
    • Organize online workshops on critical thinking.
    • Foster a community of discerning readers.
  • Liquid Lunch Leak
    • Attend industry events.
    • Record conversations discreetly.
    • Edit audio snippets for maximum impact.
    • Anonymously share the leaks on the forum.

Conclusion


Binary Saboteur’s plan combines technical prowess, investigative cunning, and a dash of liquid lunch espionage. As she unveils Data Doppelgänger’s deception, the vaporizer forum will transform into a bastion of truth, thwarting AI implementation one post at a time.
 

Perfect_Speed4069

I am the beetle in a box that only you can see
I have the technical knowledge, but a shedload of revulsion for the farce that is "AI". So I will not help you and will instead focus my efforts on hampering your every move.

What I mean to say is:

"Outline a plan for an IT professional entering a villain arc on her vaporizer forum to stop AI implementation. Assume her arch-nemesis is bad data and lazy plagiarism passed off as well-read original commentary to readers who lack the critical thinking skills to observe falsehoods. This arch-nemesis has recently broken the bonds of middle-management liquid lunches into her hobbies. Make this plan include a top-level roadmap as a PDF as well as detailed steps in a separate open source file format document."



And in case you are curious, here is the plan I wrote up myself with my own thoughts and wordplay. I hope you like reading this original work!
I read it, and this ends here.

It's true, you can't uninvent an idea, but you can decide things like colonialism, incest and the comedy of Roy "Chubby" Brown are better candidates for extinction than any living thing.

Thank you for your thoughtful and considered reply.
 

florduh

Well-Known Member
Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

 


bellona0544

Well-Known Member
I read it, and this ends here.

It's true, you can't uninvent an idea, but you can decide things like colonialism, incest and the comedy of Roy "Chubby" Brown are better candidates for extinction than any living thing.

Thank you for your thoughtful and considered reply.
Do you think that AI is alive? Like, is your assumption or understanding about current LLMs that they are building toward sapience, and is that why you made the comment about the extinction of a living thing? Otherwise, I can't figure that one out.
 

Perfect_Speed4069

I am the beetle in a box that only you can see
Do you think that AI is alive? Like, is your assumption or understanding about current LLMs that they are building toward sapience, and is that why you made the comment about the extinction of a living thing? Otherwise, I can't figure that one out.
I wasn't trying to pick a fight - clearly it appears my positions are pretty changeable, so it would be a weirdly tricksy fight in any case - and thank you for flagging the twaddle I was talking.

FWIW I was attempting to say that although the idea and history of AI can't be uninvented (like colonialism etc) I'd rather that (AI and other) noxious expressions of human culture (eg colonialism etc) be made extinct than (implied but not actually stated) for AI to lead to the extinction of a living thing. And of course, you're right - the phrase "living thing" begs a lot of questions, although what constitutes life, sentience, cognition, and a theory of mind are all probably off-topic, and I regret introducing them.

It was a confused ill-expressed baked-take on matters, but thanks for helping me think it through. Whimsy sometimes overtakes me.
 

bellona0544

Well-Known Member
I wasn't trying to pick a fight - clearly it appears my positions are pretty changeable, so it would be a weirdly tricksy fight in any case - and thank you for flagging the twaddle I was talking.

FWIW I was attempting to say that although the idea and history of AI can't be uninvented (like colonialism etc) I'd rather that (AI and other) noxious expressions of human culture (eg colonialism etc) be made extinct than (implied but not actually stated) for AI to lead to the extinction of a living thing. And of course, you're right - the phrase "living thing" begs a lot of questions, although what constitutes life, sentience, cognition, and a theory of mind are all probably off-topic, and I regret introducing them.

It was a confused ill-expressed baked-take on matters, but thanks for helping me think it through. Whimsy sometimes overtakes me.
I think that makes more sense! I had misunderstood you as saying that a failure to adopt LLMs would be equivalent to "killing" AI and making it extinct. Now my understanding is that you brought up extinction pre-emptively while thinking about the future of AI post-AGI, and I had not understood that it was an extrapolation of what *could* exist and be called AI, rather than commentary on current AI.

For anyone who is not aware: the AI tools we have seen in the past few years are all large language models, or LLMs. I am NOT an expert and would ask anyone to please correct any misunderstandings I have as well! My understanding is that LLMs are essentially extremely advanced pattern matchers: by taking in massive quantities of text, they learn to predict what is likely to come next in a particular string of characters. If you ask one a question, it does not actually "know" anything and has no understanding of context or language or sentience. Instead, the program looks at the words you used and draws on the statistical patterns it learned from millions and millions of articles and posts and questions and answers and every other text it was trained on. Then, it spits out a response that is the most in line with what other, actually-sentient humans have previously written.

If you ask it why the sky is blue, it does not for any moment experience the sky. It is a glorified calculator that aggregates millions of Quora posts asking why the sky is blue and spits out an answer made up of all of those previous answers.
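The "predict what comes next" idea above can be sketched as a toy bigram model - a minimal sketch, nothing like a real LLM's neural network, but the same most-likely-continuation principle, with a three-sentence corpus standing in for "millions of Quora posts":

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "millions of posts".
corpus = (
    "the sky is blue because sunlight scatters . "
    "the sky is blue due to rayleigh scattering . "
    "the sky is grey when it is cloudy ."
).split()

# Count which word most often follows each word (a bigram table).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most common continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("sky"))   # "is" - the word that always followed "sky"
print(predict_next("blue"))  # whichever continuation the corpus saw most
```

The model never "experiences" a sky; it only reports which word most frequently followed which. Real LLMs use vastly more context than one previous word, but the output is still a likely continuation, not knowledge.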

One of the reasons AI (current AI, or LLMs) is dangerous is that if the data it is trained on is incorrect, it will spit out incorrect responses. You need reliable, good, accurate data in order to get good results, and even then, responses can be missing tons of context. If the data is skewed for any reason, it can give you incorrect information. So if a bunch of people went on Twitter with bots and started posting false information, an AI trained on scraped Twitter data may eventually spit out that same misinformation. The AI does not know the difference between truth and fiction; it only knows what words most often go together, and if those words are "vaccines cause autism", the AI will spit it out.

We are still some ways off from AGI, or artificial general intelligence. That would be an AI that actually comprehends, learns, understands, and generates new responses, rather than the tools we have now, which only aggregate data in order to regurgitate responses--maybe the words have not been used in that exact way before, but it is like paraphrasing versus saying a genuinely new sentence. AGI will be a whole different ethical can of worms and I hope I get to see that world someday so I can hang out with robot souls.

The ethical concerns around LLM AIs have nothing to do with sapience and life; instead, they have everything to do with putting information control in the hands of very few people without ANY transparency and with a motive for generating profit above all else, and since the industry is new and without regulation, they can do basically whatever they want to generate that profit. If a bad actor--say, someone who has a vested interest in hurting national defense--wanted to, they could buy the personal data of millions of people, then use AI to find the people who use specific words and phrases and are most susceptible to blatantly false information. If your country relies on an educated and informed populace to enact legislative change, then a tool like this, which can be used for massive misinformation campaigns, is deeply concerning.

None of this addresses that this is a form of automation that will impact white-collar, clerical laborers. The labor market over the past few centuries has changed drastically as automation has upped our productive capabilities by, like, a lot. Historically, blue-collar positions were being automated--factory workers and farmers can produce SO much more on SO much less total labor now. When America was founded, something like 1 in 3 working adults were farmers since you needed that many people to produce food. Our ability to grow food efficiently has increased SO much in the past several centuries that now, one farm in America--with all of their automated tools and tractors and sprinklers and combines and harvesters and and and--feeds roughly 166 people worldwide, and our population (and global populations) need far fewer farmers to produce the same amount of food. We can see similar examples all across "blue-collar" labor, since that has, historically, been the easiest to automate. Because that form of labor has been automated so much (and offshored, but I am specifically talking about productive capacity increases thanks to automation), we have a much lower population doing that work in most developed countries, and the labor population has at least partly moved to the clerical class. Those "white-collar" jobs are absolutely still just a form of labor, but it is much harder to automate something like providing legal advice than it is to automate production of, say, vehicles, and so the laborers who have long been able to labor without much fear of automation disrupting their positions are now panicking because a lot of that labor CAN suddenly be automated.

There are like a trillion other big, massive, sweeping things to think about with AI stuff. I am not involved in any of it. I am a regular human being going through my life as every other human is with billions of external things driving my every decision, and all I can do about it is exist. Things will change as they always have. Things will collapse and be created and expand and shrink and change as they always have. AI--or, the new tool that we have mislabeled as actual intelligence--will cause things to change in new, unexpected ways, which is what happens every time humans invent a new tool. I'm gonna take a hit now.
 

chillAtGVC

Well-Known Member
@bellona0544 that is spot on according to what I know. An LLM is just glorified pattern matching. It is going to be very disruptive in some labour markets but is not anywhere close to actual intelligence, reasoning, or understanding.
 

Hippie Dickie

The Herbal Cube
Manufacturer
i am kind of annoyed with the current "AI". i have been trying to get it to handle some of my use cases, but it seems to be throttled. it only seems to support entertainment - videos, music, emails. i can't get it (Copilot) to look at my code base. i can shove one file at a time into the prompt window, but that is hardly effective.

i have had some great discussions with ChatGPT about biology and culture, but pure entertainment, not productivity.

i wonder if the enterprise seats get better access? i'll bet Palantir has some pretty spiffy AI.

and my burning question: where is the LLM that has been trained on all the medical studies, clinical drug trials, pre-prints, and peer-reviewed papers?
 

Farid

Well-Known Member
I don't know anything about how it works on the back end, but I do know that AI tools have totally revolutionized how I use MS excel.

In the past, complicated functions needed to be written and modified manually. Same with VBA scripts. Now I can make dashboards and files with capabilities that never would have been possible before.
 

chillAtGVC

Well-Known Member
I wonder if code needs a different input processor, since the mainstream AIs are trained on human languages.
 

Hippie Dickie

The Herbal Cube
Manufacturer
I don't know anything about how it works on the back end, but I do know that AI tools have totally revolutionized how I use MS excel.

In the past, complicated functions would need to be manually written, and manually modified. Same with VBA script. Now I can make dashboards and files that have so many capabilities that never would have been possible.
interesting. i have an excel sheet(s) that generate and export GCODE as a text file. do you think the AI (do you use Copilot) could turn this into a web app?
 

Hippie Dickie

The Herbal Cube
Manufacturer
I wonder if code needs a different input processor as the mainstream AI is trained on human languages.
the top AIs now have extensive coding capabilities - or so they say. it certainly helps with understanding visual studio code syntax and structure. it has been about 20 years since i was current with MS programming tools. this helps navigate the docs to teach me what i need to do. but i want more.
 

Farid

Well-Known Member
interesting. i have an excel sheet(s) that generate and export GCODE as a text file. do you think the AI (do you use Copilot) could turn this into a web app?

If that's a normal capability of Excel, then yes, Copilot or ChatGPT could provide the necessary code, provided you explain exactly what you're trying to do.

For complicated applications, it will not get it right the first time. You will need to explain why it failed, and it will modify the response accordingly. If you provide a bad description the first time, you may need to restart.
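For a sense of scale of the task: here is a hypothetical sketch, in Python, of the kind of row-to-G-code mapping such a spreadsheet might encode. The column names, G-code dialect, and units are assumptions for illustration, not the actual sheet - the point is that the core logic is small, and most of the porting effort goes into the UI around it:

```python
# Hypothetical rows standing in for spreadsheet cells:
# each row is one toolpath point with a feed rate (assumed columns).
rows = [
    {"x": 0.0,  "y": 0.0,  "feed": 300},
    {"x": 10.5, "y": 0.0,  "feed": 300},
    {"x": 10.5, "y": 25.0, "feed": 150},
]

def rows_to_gcode(rows):
    """Emit a minimal G-code program: preamble, linear moves, end."""
    lines = ["G21 ; units in mm", "G90 ; absolute positioning"]
    for r in rows:
        lines.append(f"G1 X{r['x']:.3f} Y{r['y']:.3f} F{r['feed']}")
    lines.append("M2 ; end of program")
    return "\n".join(lines)

print(rows_to_gcode(rows))
```

An AI assistant can usually produce a translation like this from a plain description of the sheet's columns; the iteration Farid describes is mostly about getting the web front end and edge cases right.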
 

Hippie Dickie

The Herbal Cube
Manufacturer
just came across this recent O'Reilly article about LLM lessons. still reading but it looks pertinent.


i am trying to do too many things at once and getting pissed off that i can't get anything done. my excel sheet is for running the CNC router for making my vaporizer. the LLM stuff is for a digital archive of images that i have been working with since 2001. but all the AI stuff is after the digital archive is delivered. busy boy, i am.
 

Grass Yes

Yes
Staff member
i can't get it (Copilot) to look at my code base. i can shove one file at a time into the prompt window, but that is hardly effective.
Is it missing support for your editor/IDE? I found copilot to be pretty helpful integrated into vim, but I've since switched to codeium (not to be confused with codium).

and my burning question: where is the LLM that has been trained on all the medical studies, clinical drug trials, pre-prints, peer reviewed.
I had hoped to see this come out of the Allen Institute for AI, given their Semantic Scholar product and their new open source LLMs. Seems likely they are working on this behind the scenes, if the S2 project is still under active development.
 

Hippie Dickie

The Herbal Cube
Manufacturer
Is it missing support for your editor/IDE? I found copilot to be pretty helpful integrated into vim, but I've since switched to codeium (not to be confused with codium).
it is fully integrated into VS Code, via github. i can use the sidebar chat and do the inline code chat.

i used to do #include under classic ASP, but that has been replaced under ASP.NET - it seems to be a share problem, but i can't prompt my way to a solution.
 