Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Its generated because of blur this photo has. I tried not much creating people f…" (ytc_Ugy2c5zVr…)
- "I've generated a lot of AI images and there is no way they were all based on som…" (ytc_UgwV-sNgR…)
- "Any politician that votes on an AI bailout will never get my vote ever again.…" (ytc_Ugwy55qHF…)
- "I have a paid ChatGPT account because it claims to have the ability to draft leg…" (ytc_UgwxwXW2P…)
- "Disabled artist here!! Yeah no AI """art""" is not "more accessible" for me at t…" (ytc_UgxF0mdP-…)
- "You’ve hit on the ultimate paradox of "safety" in AI: transparency for the publi…" (ytc_Ugwj_dm0s…)
- "No this is a humanity problem.. we need fixing before we try to fix something we…" (ytc_Ugxkj7yYQ…)
- "OK- so no one works. Do we have much money? If not, then who buys these robots o…" (ytc_UgyPXyFPN…)
Comment
Best hopeful that the existential risks of AI never shows up, realistically. I am of the opinion that the existential risks of AI actually do exist. The pessimistic website is a good example today because none of those real concerns actually happened. Why we must be very concerned to finding a solution to the existential risks of AI is because, if those risks ever come to light, there is an unlikelihood of an opportunity of correcting that risk. Why not think preventive rather than dismissive 😊.
Great debate! Enjoyed every bit of it.
Source: youtube | AI Governance | 2023-09-15T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwVDGcwfE1NYW9XH-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwh8RCr3PCRf9gZv2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwF-19EE07RA1K7ctJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzWDaIPsZWzKTE5m2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsbOyMmAMrNEnLT-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7DCi196hY0sr-9sJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwTWARgml9yte_J2n54AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxXOLGG4GygC5P_cG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNo-P3LDetGzQSm2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0p9CUmHlcvEPWeyB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
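The raw response above is a JSON array of per-comment codes, each keyed by comment ID. A minimal sketch of how such a batch might be parsed, validated, and indexed for lookup by ID — the function name `index_batch` and the `ALLOWED` value sets are illustrative assumptions inferred from this sample, not the pipeline's actual schema:

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above -- the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def index_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    dropping any record with a missing or out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = rec
    return coded

# Example: index a one-record batch, then look up a comment by ID.
raw = ('[{"id":"ytc_Ugz0p9CUmHlcvEPWeyB4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
batch = index_batch(raw)
print(batch["ytc_Ugz0p9CUmHlcvEPWeyB4AaABAg"]["policy"])  # regulate
```

Validating against an explicit vocabulary before indexing catches the common failure mode where the model emits a label outside the codebook, so malformed records are skipped rather than silently stored.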