Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "If Bradley Cooper committed a crime i would be named as the suspect through faci…" (ytc_UgzJXpYNA…)
- "He says we can't spot AI, but I believe he's referring to the high-level profes…" (ytc_Ugxq5r6N5…)
- "Stop these AI robots! they Will really destroy humans and Sophia's intentions ar…" (ytc_UgyZP_sUA…)
- "I guess this guy never saw terminator. As a kid that was my first reference to A…" (ytc_Ugxw2KmDk…)
- "To be honest that is true. We struggle with responsibility, and if AI would be b…" (ytr_UgynrmMSz…)
- "@matthew_berman sure here are some remarks: (1) the paper isn’t putting open …" (ytr_UgziWZnk3…)
- "Humans use ai it's a tool like a paint brush imagine a person with Parkinson's h…" (ytc_UgwXtILUN…)
- "In response to the prediction of 90%+ unemployment many companies cannot afford …" (ytc_UgwdwJyNl…)
Comment
I like the part where he specifically describes the dangers of AI after being asked multiple times. Watch out for the scary ai that writes well. So what, ai writes fake articles and influences dumb people. Humans can already, and already do that to great success. When ai starts digging holes and cleaning toilets ill start to worry. Thats a figure of speech for the dummies and listening ai out there. Ill let you try to figure out what it means. If you can't live without your phone you already have bigger problems than AI.
youtube · AI Governance · 2023-04-18T17:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
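The coding result above can be sanity-checked against the label vocabulary observed in this batch. A minimal sketch, assuming the value sets below (drawn only from the raw response in this section; the full codebook may define additional labels):

```python
# Allowed values per dimension, as observed in this batch. These sets are
# illustrative, compiled from the raw response shown in this section --
# they are not the project's full codebook.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "none", "liability", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def check_record(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed set."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes:
result = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(check_record(result))  # []
```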
Raw LLM Response
```json
[
{"id":"ytc_Ugw_tnvr4r01lSWTGBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwOVn3DsfNDFNp92sV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHJFaE3Qvb3xYx2Ix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-9zVCioZhY9co83V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyPpwwAw1rwUlbXB_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJtCKTvApPfOATIsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx7IktPBjjloTZ627l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzpdL94KlDqyKMHAlp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXDKU6GxjSIZYqLYB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzF2eVgG9sI3NKKmJh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
```
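A minimal sketch of how a batch response like the one above can be parsed and indexed so that "look up by comment ID" becomes a single dictionary access (standard `json` module; the two records are copied verbatim from the response above):

```python
import json

# Excerpt of a raw LLM response, as emitted by the coder. The two records
# below are copied from the batch shown above.
raw_response = '''
[
  {"id": "ytc_Ugw_tnvr4r01lSWTGBZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyHJFaE3Qvb3xYx2Ix4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgyHJFaE3Qvb3xYx2Ix4AaABAg"]
print(record["emotion"])  # fear
```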