Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I thoroughly enjoy the use of ai for markup languages and ideally never dealing …" (ytc_UgxxKzw2W…)
- "The first thing is that they should not make a single robot in humanoid form, th…" (ytc_UgxM728Sp…)
- "It's about the evil tech companies. Their AI chatbots convinced kids to hurt the…" (ytr_Ugz_4uL68…)
- "I can't wait until we're futuristic enough for driverless Waymo cars to be pulle…" (ytc_UgwXTIY17…)
- "Say chatGPT was hallucinating. Then ask ChatGPT: Why you snitch???🤨 Then ask it …" (ytc_Ugw0hokhf…)
- "On Twitter, I am an art connoisseur. I've seen a lot of distinctive works of art…" (ytc_UgwMM4yCA…)
- "@Smitty986 Well now I KNOW you don't know what you talking about- AI takes cert…" (ytr_Ugyhh2BB9…)
- "13:21 If anything about LLMs freak you out, you don’t understand them well enoug…" (ytc_Ugx-pyFjA…)
Comment
> AI models can be programmed to do just about anything. Start a fire, crash a car, bring down a plane, destroy a bridge, Cook a meal, make a call, make reservations, Even kill at later date. Should you be concerned? Huh? Hell freaking yeah....

Platform: youtube · AI Governance · 2023-05-17T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
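Each coded record carries one value per dimension. A minimal sketch of how a record could be validated against the category sets that appear on this page (the full codebook may define more values than the ones visible here; `ALLOWED` is inferred from these samples, not authoritative):

```python
# Category sets inferred from the coded records shown on this page;
# the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record for the comment above.
record = {"responsibility": "distributed", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(validate(record))  # → []
```

A check like this catches the most common LLM-coding failure mode: a value outside the codebook that would otherwise silently pollute downstream counts.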
Raw LLM Response

```json
[
  {"id":"ytc_UgzBN8QiLfubPvyPn3h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypeiE7phzoamGbm9h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzX-Qdgi3BnyivkvOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzEW7Zj1eOgk0S8Krh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzeihkP5WUotH61DKZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxlmEx1bw_tqGEJ9kp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw62s1tEfILim9PkNF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNKNux8xeoo3cE5iN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6zMY6xPDxrZxjbtV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzvw-EEfLsZD9nGXhJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
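Because the model codes a batch of comments per call, looking up one comment means parsing the array and indexing it by `id`. A minimal sketch, using two records from the response above (the exact shape of other responses is assumed to match):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzBN8QiLfubPvyPn3h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypeiE7phzoamGbm9h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the batch by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_UgypeiE7phzoamGbm9h4AaABAg"]
print(rec["emotion"])  # → fear
```

In practice the parse should be wrapped in a try/except, since a malformed model response (truncated array, stray prose around the JSON) is exactly what this inspection page exists to surface.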