The biggest problem I have with this idea comes from my experience over the past few days with GPT-3.5 in particular.
Things like not being able to remember previous responses or prompts, making up facts outright, or working from outdated data (a September 2021 cutoff for GPT-3.5) that needs to be updated. Until problems like that are less of an issue, I don’t see this being genuinely usable for cities, or really for anything beyond generating nonsense or random code snippets.
Also, I have concerns you’d see this taken up and used to discriminate against minorities. Whether that would be intentional or not, I can’t say.
The AI they are talking about is most likely completely different from ChatGPT.
They are likely labeling people “at risk” with some very reliable old-school ML algorithm, such as XGBoost.
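To make that concrete, here’s a minimal sketch of the kind of pipeline I mean: a gradient-boosted classifier trained on tabular data, with an “at risk” flag set by a probability cutoff. The features, data, and threshold below are made up for illustration; nothing here comes from the article.

```python
# Hypothetical "at risk" scoring pipeline: train a gradient-boosted classifier
# on tabular features, then flag people whose predicted risk exceeds a cutoff.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Fake tabular data standing in for whatever features the city actually uses
# (e.g. age band, prior service contacts, income bracket) -- purely illustrative.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# "At risk" label = predicted probability above a policy-chosen threshold.
risk_scores = model.predict_proba(X)[:, 1]
at_risk = risk_scores > 0.7
print(f"{at_risk.sum()} of {len(at_risk)} flagged as at-risk")
```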
Biases are clearly a problem, but they are more manageable than human biases, because their mathematical form makes them easier to find and remove. This is why, for instance, EU regulations require mathematical models in many areas, to replace “human intuition”: mathematical models are better for customers.
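That auditability is the whole point: once the model’s decisions exist as numbers, you can compute a fairness metric over them directly. Here’s a rough sketch of my own (not anything from the article) using the demographic-parity gap, i.e. the difference in positive-decision rates between two groups.

```python
# Sketch: with model decisions in hand, bias can be measured directly,
# e.g. as the gap in positive-decision rates between two groups.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(1)
y_pred = rng.integers(0, 2, size=500)   # model decisions (flagged / not flagged)
group = rng.integers(0, 2, size=500)    # protected attribute, e.g. two demographics

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
```

A human caseworker’s “intuition” gives you nothing comparable to audit.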
They aren’t doing anything new, just calling it AI
Based on the sparse information in the article, they’re training the model on actual data points, not just feeding the data in human-readable format to an LLM.