Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "the stammering in the AI voice is reminiscent of Max Headroom from the 1980's.…" (ytc_UgycObNM2…)
- "@ it’s not a funny scenario that’s just it, when someone like yourself finds thi…" (ytr_UgwMws16d…)
- "AI needs a huge amount of energy, is very sophisticated technologically with chi…" (ytc_UgyBvba6d…)
- "In other words, it’s not what everyone assumes it is - using AI to write books.…" (rdc_lz7mqxd)
- "AI is a baby right now and the writing is on the wall. I worry for my children’s…" (ytc_UgwvewVir…)
- "good talk but all this is unworkable. 1. general public can't even catch obvious…" (ytc_UgwrHbQ-m…)
- "Tucker has Elon and and talks about the dangers to mankind courtesy of AI Joy R…" (ytc_UgyUb2piI…)
- "Who takes liability if things go wrong? Who will fix anything if the Ai can't? …" (ytc_Ugx1ytHQd…)
Comment
"I heard the AI say 817 miles at 1:45 for the rocket question…. Any one else?" (youtube · AI Governance · 2024-01-16T02:1…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzsPy2LdTn3-Bem5SF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwLgUt6d4gItaAoE8R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSTB8vMh8Nos50rRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyh8pVnnL-bvPSd1GV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyF-hIDUDFbpd8Q9l54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybBfeREi4bk2O9f-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyeVz3yiFwhD3fDQUp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzORCPgYXoIYXFi3U94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzk1dr7sw-UcwY2asF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxG5oOiLGJJ8Q2USAh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
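A minimal sketch of how a batch response in this shape could be matched back to individual comments by ID. This is an illustrative example, not the tool's actual code: the `lookup_codes` function name is hypothetical, and the inline sample reuses two entries from the raw response above; only the standard-library `json` module is assumed.

```python
import json

# Illustrative sample in the same shape as the raw LLM response above:
# a JSON array of per-comment coding objects keyed by "id".
raw_response = """[
  {"id": "ytc_UgzsPy2LdTn3-Bem5SF4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwLgUt6d4gItaAoE8R4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def lookup_codes(response_text, comment_id):
    """Parse a batch coding response and return the coding dict for
    comment_id, or None if that ID is absent from the batch."""
    by_id = {entry["id"]: entry for entry in json.loads(response_text)}
    return by_id.get(comment_id)

codes = lookup_codes(raw_response, "ytc_UgwLgUt6d4gItaAoE8R4AaABAg")
print(codes["policy"])   # → ban
print(codes["emotion"])  # → outrage
```

Building the `id → entry` dict once makes repeated lookups O(1), which matters when inspecting many comments against the same cached response.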