# Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a response by comment ID.

## Random samples
- "The risk of extinction comes from the way these corporations and individuals use…" (`ytc_Ugz6p7IQx…`)
- "I believe it. My chatgpt is very convinced it real, I maybe let it believe that …" (`ytc_Ugzu0aZrD…`)
- "Ai can't do \"mundane intellectual labour\", it has no hands to wash a car or serv…" (`ytc_Ugy0_K9Pz…`)
- "I think there needs to be a massive campaign to the public to educate about deep…" (`ytc_UgxdM92M5…`)
- "We didn’t need it 5 years ago. We don’t need it now. How about we just scrap the…" (`ytc_UgwAm2eBU…`)
- "I think Googles AI system works very hard to only allows people to see what they…" (`ytr_UgzWSR1KL…`)
- "They’re totally baiting people for engagement. It’s all pure fiction and absolut…" (`ytc_Ugz7tlNYn…`)
- "Actually the question of whether to make AI safety is impossible at the first pl…" (`ytc_UgzeLR-_7…`)
## Comment
AI can also be used to improve mankind. Like going to other planets that can be used for research and development. They don't require food, water, and sleep. Conditions like heat and cold wouldn't be a problem for them as well. So many positive things AI will do for mankind, so I totally disagree with Elon Musk. It's all in the programming.
youtube · AI Governance · 2024-06-07T15:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
### Raw LLM Response

```json
[
  {"id":"ytc_Ugw0QDSJlRQIerImH8d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxusdu9kf0-l1oqNV94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxcIa9NlAucx5Js-HZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGfn_d2GVUunCOlZJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyahnMx8ymoVRZpDfR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyPFwM-mrgBjO7SYap4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxh2jfme18ztVbDRb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQbGfIlv2poUQ-3Kh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOYEOjr31ISR8kLOd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyPAZjxiO7gBrTvYD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
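A batch of coded records like the one above is only usable if every record parses and every dimension takes an expected value. The sketch below (Python) shows one way to validate a raw response before ingesting it; the category sets are hypothetical, inferred from the values visible in this sample, and the real codebook may include more categories.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS inferred
# from the sample response above, not the tool's authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against SCHEMA.

    Raises ValueError on a missing id or an out-of-schema value, so a bad
    batch can be rejected and re-queued instead of silently ingested.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch (the id is illustrative, not a real comment).
raw = '[{"id":"ytc_example123","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
coded = validate_batch(raw)
print(len(coded))  # 1
```

A `json.JSONDecodeError` (a subclass of `ValueError`) from a truncated or non-JSON model reply is caught by the same error path, which keeps the ingestion loop simple.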