Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Why can’t we just invest in people living full healthy happy lives instead of st…" (ytc_UgweM0AeB…)
- "Ngl, I always add "good morning", "please" and "thank you" to my ChatGPT request…" (ytc_Ugzeyevr-…)
- "@Alex-ns6hj a guy from work showed me his AI that can do this actually. Its scar…" (ytr_UgwjYeS1F…)
- "Now if we could just install an AI government so we don't have to deal with the …" (ytc_UgyWyvINp…)
- "The whole con is them saying we’ve only seen 5% of what LLMs are capable of as o…" (ytc_UgxsW7iiT…)
- "we have to humanize ai , if we want it to be sentient , its just a better idea t…" (ytc_Ugz0ENZc0…)
- "I wonder if AI gets so bad they could deepfake and voice fake to make it look li…" (ytc_UgxsW7iiT…)
- "It's now a common explanation and even AI researchers did use it at one point bu…" (rdc_mnxhj0q)
Comment

> Right Elon. That is exactly why you create an unchecked AI that’s actually lying and spreading misinformation. WOW. Great advice 😂

youtube · AI Governance · 2025-10-11T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx5wwS7NlFN4YvwsYN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgziFoRpKCOKaQqP9Xp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwgvN9PY6TBTL39HN54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx7NaMBeWgD36T85754AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_oyMCs_YGcoqtsqt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzZrWnZ9Ta8dLjUXSF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwvfhiTZk9tf4xLdmB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBZJQfy6W49KvReX94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgymS9kH4Ww_BVVlQz54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-ah084AHi8aE9cIF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
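Before accepting a raw batch response like the one above, it is worth validating that every row parses and that every dimension value falls inside the codebook. The sketch below is a minimal example of that check; the allowed value sets are assumptions inferred only from the values visible in this sample batch and the coding-result table, so the real codebook may define additional codes.

```python
import json

# Allowed values per dimension, inferred from this page's sample batch and
# coding-result table (assumption: the full codebook may include more codes).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "approval",
                "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed or off-codebook rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        missing = {"id", *ALLOWED} - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing keys {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row[dim]!r} not in codebook")
    return rows

# Hypothetical one-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

A validator like this catches the usual failure modes of LLM coding runs (truncated JSON, invented category labels, dropped keys) before bad rows reach the database.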