Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing random samples.

Random samples
- "AI can have my job! It’s shit 💩. I’ll happily do something more meaningful in my…" (ytc_UgwB3bTa_…)
- "This is where I've come around a bit on the doomerism surrounding AI. It cannot …" (ytr_Ugx3IxAb4…)
- "We can't trust man to do what's right; so why do you think we want to completely…" (ytc_UgwSGGRz-…)
- "Gah the AI avatars still creep me out. It's the little micro expressions that t…" (ytc_Ugwec36qP…)
- "Interesting that many comments expressed negativity to Kate's advocacy of AI tha…" (ytc_UgzbXL5ec…)
- "\"Oh, Glaze/Nightshade's effect on AI trainers has been fixed so you shouldn't us…" (ytc_UgyrdrQfh…)
- "I have said many times, computers, robots and AI will eventually be our bosses, …" (ytr_UgySEoURo…)
- "Don’t forget the ones who are down bad for the Ai and tries for them to you what…" (ytc_Ugyq8M-7q…)
Comment
"If Ai is getting so clever, you'd think we'd teach it to do good things like curing cancer, solving world hunger or something"

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-07-07T07:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzLVgYV3FyTij9Mbtt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwtIu7FTb1_wYuOsyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxoCt89eBNlLTPT25t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzC3dVGfw5UC3s23cl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugyz9sJ9ELxjdhkGfrp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyENxhpxYn3QnYeTZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzmTNfpmF7z8CXluLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQjlaKGfYk3wMTx0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwSWLuAbOCjzbHqEmV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzODbk_nIdN4ekBESx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
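The raw LLM response is a JSON array with one coding object per comment, keyed by `id`. The lookup-by-comment-ID feature described at the top of the page can be sketched as follows; this is a minimal illustration, not the tool's actual implementation, and the helper name `index_by_comment_id` is hypothetical (the field names match the response shown above):

```python
import json

# A raw LLM response: a JSON array of per-comment codings,
# using the same schema as the response shown above.
raw_response = """
[
  {"id": "ytc_UgzLVgYV3FyTij9Mbtt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwtIu7FTb1_wYuOsyp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a raw LLM response and index the coding objects by comment ID."""
    codings = json.loads(payload)
    return {item["id"]: item for item in codings}

lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_UgwtIu7FTb1_wYuOsyp4AaABAg"]
print(coding["policy"])   # → regulate
print(coding["emotion"])  # → approval
```

Indexing once into a dict makes repeated ID lookups O(1), which matters when many coded comments are inspected against the same batch response.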