Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Comment
Well, that was very sobering, on a Friday evening! But this is the one thing that seriously scares me. To be honest, I don't think there is any stopping AI now. It is advanced enough to continue to learn and be clever enough to thwart anything that scientists throw at it. I have no words for the idiots that want to continue to push the boundaries of technology to the point of the destruction of the human race. But there you go, that's crazy scientists for you.
Source: youtube · Topic: AI Governance · Posted: 2023-07-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxck5NQbcCCEYuoftx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0lhqmjRfBebw6e1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwaLnC22p19PJHwPsV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVuTq4bBaZdvhEeVx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyvVpdSQNcuncTOR4p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyhnrBVzO2RXQkxYst4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwroemM9-i4mH_m8c54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVj6Ux4Pw3R7g6RHx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjfaOCXt4omTQri154AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8P7rdTDxNC2NTU954AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```