Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Where's the rizzler / The one that says / You look good and then the ai falls for …" (ytc_UgzbcMwQa…)
- "Me: Well, what did you do to try to learn how to make the most out of the large …" (rdc_mowmtqw)
- "This truly is the parallel. Among the lessons we should have learned by now . .…" (rdc_cti7g1z)
- "I am an artist myself, and it's not like I have no sympathy for people of my kin…" (ytc_UgyG79Fj7…)
- "I'm all for development and progress but I feel like we need to separate militar…" (ytc_UggQA9piQ…)
- "All of these options make me feel like I am watching humanity about to leap to i…" (rdc_je3qcba)
- "Don’t have to build a spy system, merchants are gathering information on you to …" (ytc_UgzbaZgAW…)
- "I mean the writing sucks now a days anyway so if I was Hollywood yea I would be …" (ytc_UgzXJPPpF…)
Comment
When he talks about two main risks (people misusing AI, and AI on its own becoming malevolent), he misses a third. The third risk is that people will become too dependent on AI and won’t learn to think or do for themselves. The third risk is we will devolve as a race into something that can’t survive without AI.
youtube · AI Governance · 2025-06-16T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyrYsW595L2-_DewEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3pSftKWBS-32uk8t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx7-GrZRJnqy1URviF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyTp6iSIWtvOHk7rex4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgykLi8xICfWHVZK7q54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyUAy6tqRGL1IW_YD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkBfq2w3k8ikofV0J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcJ5SCXtQChZFoZhZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6297PxIITLI0LC914AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwl7_AoIffXxSKy9Oh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
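A raw response like the one above is a JSON array with one record per comment, each carrying the four coding dimensions from the result table. A minimal parsing sketch in Python, assuming the allowed values are exactly those visible in this page (the real codebook may define more categories, and the helper name is illustrative):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    dimension values all fall within the allowed vocabularies."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Validating against a fixed vocabulary matters here because an LLM coder can emit off-schema labels; dropping (or flagging) such records keeps downstream dimension counts clean.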