Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Two risks of AI:
1) We could create something that is smarter than us, as discussed here.
2) We could delegate too much control to an AI that isn't as smart as us.
youtube · AI Governance · 2023-04-18T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyw0MVTlfnOuX7MCgJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwN2PUKZ056drpKALl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytdQ72feUFvJ7na154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxIO2S2lbqVTtY6NkZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-RiuIskwxLUKeXQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmUkf8GKrTB8HquBx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHximDs3dlIK6KOZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxpK7c2OhNki_l1J6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3zh2DWDHvyOGuVTh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyBHLTEYEeFL2pwUkd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
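A raw response like the one above can be parsed and indexed by comment ID to recover the per-comment codings shown in the Coding Result table. A minimal sketch using only the standard library; the `index_codings` helper and the single-record sample payload are hypothetical, and the four dimension names are taken from the table above:

```python
import json

# The four coding dimensions, mirroring the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Hypothetical one-record payload in the same shape as the raw response above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwN2PUKZ056drpKALl4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and map comment ID -> coded dimensions,
    rejecting records that are missing any dimension."""
    indexed = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgwN2PUKZ056drpKALl4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "inspect any coded comment" lookup cheap: one parse of the raw response, then constant-time retrieval per comment.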