Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with their comment IDs):

- "I think the alignment problem is fundamentally unsolvable. It's basically an ext…" (ytc_Ugy1ylKx1…)
- "Chatgpt has been doing so for the past year so likely its similar to their syste…" (ytr_Ugzb3UHvH…)
- "Indian Gov Is Creating An AI Model and Every Soul from India Knows How Much It's…" (ytc_Ugzr8j7f4…)
- "AI? Someone should invent headbands for politicians which can detect when they a…" (ytc_UgyCrTQOw…)
- "My guess is Ai upscaling which is whats causing the distortion/artifacting becau…" (ytc_UgwfvRtMZ…)
- "I've been on a Waymo before and we were coming up to an actual active crime scen…" (ytc_UgxhGPHNp…)
- "Great points, another thing to think about is ai could not produce what it does …" (ytc_Ugx5OFJyD…)
- "Autopilot is not the same as Teslas FSD autonomy. Tesla is mostly focusing on th…" (ytc_Ugx2GJi7G…)
Comment
This form of breathless doom laden "the sky's falling... eeks!!" isn't very mature in a discussion of the possibilities and dangers of A I moving too fast. I think we should have predicted some negative vectors in the Internet back in the 1990s. We are likely also to act too slowly in regulating the tech of A I. Super smart "independent of human input" AI isn't besieging us yet. There's a lead time to spare if we use it sanely and calmly to ensure AI won't engage in inhumane or negative behaviour as in sci fi novels. Only if real human control with moral intentions is present to continually observe robots, machines that think.
youtube · AI Governance · 2025-11-28T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
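A coded record like the one above can be sanity-checked against the dimension vocabularies. As a minimal sketch: the full codebook is not shown on this page, so the allowed value sets below are assumptions inferred only from the values that appear in the table and the raw response.

```python
# Hypothetical validation of one coded record. The ALLOWED sets are
# assumptions reconstructed from values observed on this page, not the
# project's actual codebook.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "unclear"},
}

# The coding result shown in the table above.
coded = {
    "responsibility": "government",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "resignation",
}

# Collect any dimension whose value falls outside its vocabulary.
invalid = {k: v for k, v in coded.items() if v not in ALLOWED[k]}
print(invalid)  # {}  (every value is in its vocabulary)
```

An empty `invalid` dict means the record passes; anything else flags a dimension the model filled with an out-of-vocabulary value.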
Raw LLM Response
```json
[
  {"id":"ytc_UgyB4eHmJcwIHfj54V14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwmo3OuVAMyxO1N_r14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz2EsxpoXvG55B9YlR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwsWYEKIpyyVfqp0fd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywrwCsV7brjr-CYYV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugy34CXJNNg7V8FzM2B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7O0B8NSW93_ZQaP54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyCK6P3uQw_4r_pDfZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1_Noh3obDQJmXSfl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxXCVuAtwKTFVbIS5V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
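The raw response is a JSON array, so consuming it is a one-liner plus an index. The sketch below parses a trimmed copy (two of the ten records shown above) and builds an ID-keyed lookup, mirroring the "Look up by comment ID" feature of this page.

```python
import json

# A trimmed copy of the raw LLM response shown above (2 of the 10 records).
raw_response = """
[
  {"id":"ytc_UgyB4eHmJcwIHfj54V14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgywrwCsV7brjr-CYYV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
"""

# Index the coded records by comment ID for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw_response)}

rec = records["ytc_UgywrwCsV7brjr-CYYV4AaABAg"]
print(rec["responsibility"], rec["policy"])  # government regulate
```

Parsing the whole array up front (rather than streaming) is fine here because a coding batch is small; a malformed model response would surface immediately as a `json.JSONDecodeError`.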