Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> In order to prove that this works, they would need to apply this algorithm to a number of people over time and compare their outcomes to their scores. With no intervention. Once the baseline was reached, then there's the question of what sorts of intervention would help. And defining what "helping" is. Avoiding criminal charges seems like a good baseline, and therefore confronting them about pretextual issues is inherently a problem. I can't see this working.

Source: youtube · Category: AI Harm Incident · Posted: 2025-12-24T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwsc6O6VHYs2APKT2V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRKS6GD6T9XHF9Gcl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxshz6RbLw_bEiBSyZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy5vb_Dn8470h7PoCJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzeuN_7H1U9KVw6bOV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwVhzf1uy70hJajgq94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxERVM4okgAQzaSh_h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzjsM_ddbMzfDFiPPp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlvOdWMWhy0vV61BN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwgf6vBYQpwtJyM6FF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
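The raw response is a JSON array of per-comment codes, one object per comment with the four coding dimensions. A minimal sketch of parsing and validating such an output in Python — note the dimension vocabularies below are only the values observed in this batch; the actual codebook may define additional categories:

```python
import json

# Allowed values per dimension. These are taken from the responses shown
# above; the full codebook may include more categories (assumption).
VOCAB = {
    "responsibility": {"none", "company", "government", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID.

    Raises ValueError on out-of-vocabulary values, so malformed model
    output is caught before it is stored as a coding result.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in VOCAB}
    return coded
```

With the response above, `parse_codes(raw)["ytc_Ugwsc6O6VHYs2APKT2V4AaABAg"]["emotion"]` returns `"indifference"`, matching the Coding Result table for the inspected comment.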