Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up by its ID, or browse a random sample.
Comment
> We wouldnt know what a future AI would do. Something that is smarter than the human race is hard to comprehend, so how would we know what it would do? If an AI has triple the intelligence of the smartest human ever, it could come up with a solution to save humanity or fix us without causing extinction. AI is something to worry about and do safely but we wont be as a smart as it and the future AGI could do anything. I hope we go down the benevolent route, where several AIs merge into one and lead humanity into the future by getting rid of greed, corruption and just human emotions that slow us down like Pride. I hope AI saves humanity from ourselves.

- Source: youtube
- Topic: AI Harm Incident
- Timestamp: 2025-08-01T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
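Each coded comment is scored on four dimensions. A minimal sketch of validating one coding record against the label sets that appear on this page (the label vocabulary below is an assumption inferred from the examples shown here, not the full codebook):

```python
# Allowed labels per dimension -- inferred from the codings visible on
# this page; the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "mixed", "approval", "outrage", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} label: {value!r}")
    return problems

# The coding shown in the table above:
coding = {
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "mixed",
}
```

A record with a label outside the schema (or a missing dimension) would come back with a non-empty problem list, which is useful for catching malformed model output before it reaches the database.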
Raw LLM Response
```json
[
  {"id":"ytc_Ugzf432KKSQbBpV7xkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhUBQWqK8utjRkQ0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxgmc8eo4rhlL536f54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyrjiPRiarJADEjejF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXMRqfM1yGKS4NYz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwGOmtEcIo4rWH3BSR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydIv8MbPK_ME-VjV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzz3BfKjMzxEaHT47x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVH_HSoPWlemuDFi14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwmzO-hHpe2Dvi6vWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
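The raw response is a JSON array with one object per comment in the batch. A small sketch of parsing it and looking a coded comment up by its ID, as the inspector does (variable names are illustrative, and the payload is truncated to two rows):

```python
import json

# Raw model output for one batch, shaped like the response shown above
# (only two of the ten rows are reproduced here).
raw_response = """
[
  {"id": "ytc_Ugzf432KKSQbBpV7xkB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXMRqfM1yGKS4NYz94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
"""

codings = json.loads(raw_response)

# Index the batch by comment ID so any coded comment can be fetched directly.
by_id = {row["id"]: row for row in codings}

coding = by_id["ytc_UgxXMRqfM1yGKS4NYz94AaABAg"]
# coding["emotion"] -> "mixed"
```

Keying the batch by `id` is what makes the "look up by comment ID" workflow cheap: one pass over the response, then constant-time lookups afterwards.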