Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
16:26 is describing the human race. We were all created, meaning we were all e…
ytc_UgxJthGsP…
I going to say it’s that same as before. We heard the same about assembly and th…
ytc_Ugy9d0TeU…
I’ll be honest…..this is a level above McDonald’s cashier. Set your bars higher …
ytc_UgxVr02bl…
In the war for survival, you can safely bet on mother nature. Solar flares, eart…
ytc_UgxKF_ZKr…
the gemini ad thing makes no sense bcs if i do a meme i dont want a stupid thin…
ytc_Ugwiuwl3e…
Jusnaturalism is the ethical philosofy to rights that make sense "the only one" …
ytc_UgjlwgYTf…
I sorta see the AI artist perspective. I just wouldn't call him an artist. He'…
ytc_UgzA4nPHm…
I'm pretty sure a scientist tried to copyright the art his AI made though it fel…
ytc_UgzKUUqRV…
Comment
8:28 I've heard about these tests and I've found them to be a bit leading.
They'll tell it to "do this thing, and this thing is the only thing you care about," or something like that. Basically prompting it into acting like a goal maximizer, which would obviously be misaligned.
Similar thing with AI figuring out it's actively being trained and trying to act more aligned. A lot of the time they just tell the AI it is being trained, which is not necessarily a good way of knowing how it would work in reality. How would an AI actually figure out it's being trained without being explicitly told it is?
youtube
AI Governance
2025-08-26T17:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzcCrHZJJxknEBhj6t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwvyYKaV--lqBPpVsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwju9H471QELXm_kph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyTZPkpxjQFi7ST6h94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwN1M2YFT83yGDmXWd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
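The raw response above is a JSON array of per-comment records, each keyed by a comment `id` with one value per coding dimension. A minimal sketch of how such a batch could be parsed into a lookup table is below; the field names come from the response shown here, but `parse_batch` and its skip-incomplete-records behavior are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw response above, used as sample input.
RAW = '''[
 {"id":"ytc_UgzcCrHZJJxknEBhj6t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
 {"id":"ytc_UgwvyYKaV--lqBPpVsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Dimension names as they appear in the response schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Parse a batch response into {comment_id: {dimension: value}}.

    Records missing any dimension are skipped rather than raising,
    so one malformed record doesn't discard the whole batch.
    (Hypothetical policy; the real tool may handle this differently.)
    """
    out = {}
    for rec in json.loads(raw):
        if "id" in rec and all(d in rec for d in DIMENSIONS):
            out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = parse_batch(RAW)
print(codes["ytc_UgzcCrHZJJxknEBhj6t4AaABAg"]["policy"])  # liability
```

Keying the result by comment ID mirrors the lookup-by-ID inspection flow: the coded values for any sampled comment can then be retrieved directly from its `ytc_…` identifier.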