Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "IMO, we DON'T need UBI. Ubi is bad because that money could be spent on... well.…" — `ytc_UgxLwLi8M…`
- "Why would they create a horrible robot / To destroy and take over the world / Soph…" — `ytc_UgweSTkzs…`
- "***** Sure we could stop it if we wanted, we would just have to collectively wan…" — `ytr_UgjXoPJGX…`
- "Zamn~ / Did we really just get to the point where we are taking inspiration from a…" — `ytc_Ugza6cTHa…`
- "actually things like robotics means that one surgeon can use a computer and robo…" — `ytr_UghR7seuF…`
- "Im glad i use physical media for all my art. Ai cant steal my acrylics…" — `ytc_UgwFU5NWi…`
- "Just started. How long until the black women talk about AI being racist I wonder…" — `ytc_UgyZsz-Yd…`
- "If people just used autopilot as assistance, not as an automatic driving turn of…" — `ytc_Ugws_O5Tm…`
Comment (youtube · AI Governance · 2023-04-18T05:0…)

> AI should be in a box , it has it’s purpose and can not exceed that purpose , allowing AI to grow exponentially is very dangerous , if you think a satan is bad , an AI without limits would be far worse .
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyw0MVTlfnOuX7MCgJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwN2PUKZ056drpKALl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgytdQ72feUFvJ7na154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIO2S2lbqVTtY6NkZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-RiuIskwxLUKeXQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmUkf8GKrTB8HquBx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHximDs3dlIK6KOZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxpK7c2OhNki_l1J6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz3zh2DWDHvyOGuVTh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBHLTEYEeFL2pwUkd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
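A raw response like the one above has to be parsed and checked before the codes land in the table. As a minimal sketch of that step, the function below parses the JSON array and validates each record against the four coding dimensions; the allowed value sets are inferred only from the records shown here, not from the project's actual codebook, and the function name is hypothetical.

```python
import json

# Allowed values per dimension — an assumption inferred from the sample
# records above, not the project's real codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError on any record whose value falls outside the
    allowed set for its dimension, so bad model output never reaches
    the coded dataset silently.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim} value {rec.get(dim)!r}"
                )
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a single (shortened, hypothetical) record:
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"}]')
coded = parse_coding_response(raw)
```

Keying the result by comment ID mirrors the "Look up by comment ID" view above: once parsed, fetching the codes for any coded comment is a single dictionary lookup.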