People fundamentally misunderstand what AI is useful for and what it is doing. It is nothing like an Artificial General Intelligence; it is more like a better way to search for information and interface with it. Just use open source offline AI, not the proprietary crap. The real issue is not what the AI can create. This is no different from what a person is capable of when they are aware of the same content, be it code, art, or music. Just because I am inspired by something I am aware of does not give the original source a right to my thoughts or products. AI works at the same level. It is an aggregate of all content, but it contains none of the original works, any more than a person who knows an artist's paintings and tries to paint something in a similar style.
The real issue that people fail to talk about is that AI can synthesize an enormous amount of data about a person after prolonged engagement. This is like open port access directly into your subconscious brain, and there are plenty of levers and switches it can twist and toggle. Giving this kind of interpersonal access to a proprietary stalkerware system, where parts of humans are whored out to the highest bidder for exploitation, is totally insane. This type of data can manipulate people in ways that will sound like science fiction until it becomes normalized. Proprietary AI is criminal in its potential to manipulate and exploit, especially in the political sphere.
There is nothing in an LLM that is able to verify truth. They should not be used for accurate information unless we make some sort of technological breakthrough on that front. They are really good at generating plausible text, though.
People, and the internet, are no different. The vast majority of information that exists is incomplete or wrong at some level. Skepticism is always required, but assessing any medium by its performance, without premeditated bias, is the only intelligent approach that can grow with improving technology. Very few people are running the larger models (65B parameters or more) that they fully control, in an environment where they control every aspect of the LLM. I have such a setup on my own hardware, running offline. On its own, my system is ~95% accurate on the tasks I use it for, and it is more accurate at these than the results I find when searching the internet.
There are already open source offline models specifically designed to work on scientific paper archives, where every result cites its source from the database.
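A toy sketch of that citation pattern, under loose assumptions (the two-entry "archive" and the keyword-overlap scoring are made up for illustration; real systems run embedding search over an actual paper database):

```python
# Toy citation-backed retrieval: every answer points back to a source
# in a small in-memory "archive". The keyword-overlap score below is
# a stand-in for real embedding search; the paper IDs are fictional.
PAPERS = {
    "doi:10.0000/a1": "transformer attention scales quadratically with sequence length",
    "doi:10.0000/b2": "quantization reduces model memory at a small accuracy cost",
}

def answer_with_citation(query: str) -> str:
    words = set(query.lower().split())
    best_id, best_text, best_score = None, None, 0
    for pid, text in PAPERS.items():
        score = len(words & set(text.split()))  # crude relevance score
        if score > best_score:
            best_id, best_text, best_score = pid, text, score
    if best_id is None:
        return "no supporting source found"
    return f"{best_text} [source: {best_id}]"

print(answer_with_citation("does quantization reduce memory use?"))
```

The point is structural: the model's output is constrained to passages that exist in the database, so the citation is checkable rather than hallucinated.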
Agents are a class of AI system built from multiple models: one model can send the prompt on to more specialized models, or to a series of models equipped to check and verify a response, doing things like citing sources or verifying claims against a database.
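A minimal sketch of that routing-and-verification pattern. Every component here is a placeholder function (in practice each would be an LLM call, and the router would itself be a model rather than a keyword check):

```python
# Sketch of the agent pattern: a router dispatches the prompt to a
# specialist, and a verifier checks the specialist's claim against a
# database and attaches a citation. All parts are toy stand-ins.
FACTS = {"water boils at 100 C at sea level": "CRC Handbook"}

def math_specialist(prompt: str) -> str:
    # toy specialist: assumes the prompt is a safe arithmetic expression
    return str(eval(prompt))

def science_specialist(prompt: str) -> str:
    # toy specialist: always returns the one claim it "knows"
    return "water boils at 100 C at sea level"

def verifier(claim: str) -> str:
    source = FACTS.get(claim)
    return f"{claim} [verified: {source}]" if source else f"{claim} [unverified]"

def route(prompt: str) -> str:
    # placeholder router: digits mean "math", otherwise verify a science claim
    if any(ch.isdigit() for ch in prompt):
        return math_specialist(prompt)
    return verifier(science_specialist(prompt))

print(route("2 + 3"))                                 # -> 5
print(route("at what temperature does water boil?"))  # -> verified claim with source
```

The design choice worth noting is that verification is a separate stage from generation, so the answering model never gets to grade its own work.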