Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I think that we will know that AI is conscious when it wants something that we h…" (ytc_Ugykd4hfN…)
- "I want to see self driving cars in India, that will be like living in a GTA ope…" (ytc_UgyGe1Nur…)
- "Humanity is acting as a child attracted by fires, told not to reach out and touc…" (ytc_UgyWAnlWn…)
- "if I agree with the fact that AI doesn't produce Art, it nonetheless produce ill…" (ytc_UgzK_tFDY…)
- "Actually it's not dilema considered if it is controlled by drivers instead. A.I …" (ytc_UgjTiwroB…)
- "Clearly partially scripted. Even the modern GPT-3 software language model is not…" (ytr_UgzOWkOHq…)
- "So, no one's not learning their lesson after watching the movie Terminator,RoboC…" (ytc_Ugx11F1pa…)
- "the real dystopian vision IMO will be having to put up with ubiquitous US gramma…" (ytc_UgylEVs6e…)
Comment
"So much for the Three Laws of Robotics, eh?"
What is the risk of human extinction without AI? · youtube · AI Harm Incident · 2025-08-03T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
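The table above is a rendering of a single coded record across the four coding dimensions plus a timestamp. A minimal sketch of that rendering step, assuming the record is held as a plain dict (the `to_markdown` helper and its field keys are our own naming, not the tool's API; the values are copied from the table above):

```python
# Render one coded record as the markdown "Coding Result" table shown above.
# The dict keys and the to_markdown helper are illustrative assumptions.
def to_markdown(record: dict) -> str:
    rows = ["| Dimension | Value |", "|---|---|"]
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
        "coded_at": "Coded at",
    }
    for key, label in labels.items():
        rows.append(f"| {label} | {record[key]} |")
    return "\n".join(rows)

record = {
    "responsibility": "unclear",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "fear",
    "coded_at": "2026-04-27T06:26:44.938723",
}
table = to_markdown(record)
print(table)
```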
Raw LLM Response
```json
[{"id":"ytc_UgyJBh20xShkhH0ufAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP78EWmR4IK0txoUx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuJIzGPSBQCyKqeqh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwnuWJHDhswXQ5qYWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6Gk95M83SSQbHcAR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8-z4cK1d-cb32BTN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNcdf_PAIoQrjpmUx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyIanSDNooH0zu_E2t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzjdeHZsugw8T4bUgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzW4PJD1n9U-RUWM7d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}]
```
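The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing it and indexing records by ID for lookup, assuming the field names shown in the response; the allowed vocabularies below are inferred only from values appearing in this response, and the real codebook may define more categories:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgyJBh20xShkhH0ufAR4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzW4PJD1n9U-RUWM7d4AaABAg","responsibility":"developer",
  "reasoning":"virtue","policy":"ban","emotion":"outrage"}
]'''

# Vocabularies inferred from this response only; an assumption, not the
# tool's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation", "unclear"},
}

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index records by comment ID,
    dropping any record with an out-of-vocabulary value."""
    by_id = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw)
print(codes["ytc_UgzW4PJD1n9U-RUWM7d4AaABAg"]["policy"])  # → ban
```

Indexing by ID mirrors the tool's "inspect the exact model output for any coded comment" lookup; the vocabulary check guards against the model emitting values outside the codebook.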