Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Once AI and robotics reach a certain point the elites will release a very conven…" (ytc_UgxAgcMVr…)
- "The reason we value art is because it takes an individuals time, effort and self…" (ytc_Ugwcq6E3r…)
- "If you want to see a horror story you can gladly look at my c. Ai chats💀…" (ytc_UgyldxAdr…)
- "Oh my gosh, this is just one reason I never want to get in a driverless car! Wa…" (ytc_UgzpvWHWw…)
- "I'm fine with the introduction of AI to life and them doing our jobs and working…" (ytc_UgwqA6jr9…)
- "I think most of our fears over AI have already come true, they just haven’t made…" (ytc_UgydIv8Mb…)
- "AI can be abused just like many other things. We can’t blame it. I blame our hum…" (ytc_Ugx_AC-4T…)
- "Omgosh, I freaking love this video. I did a research paper for college on AI's l…" (ytc_UgytD5mNg…)
Comment
Technology has been improving exponentially for thousands of years. It always seems faster when your are in the middle of it but humans have always managed to cope. This time may be more challenging than ever for us to deal with it.
There is no question that AI will eliminate billions of jobs world-wide and entire industries. Society is not ready for that sort of massive unemployment. We need to determine how our economy will function when we don't trade time for money and what happens when nations do not adopt AI at the same pace and we end up with a have,and have not, world. Is conflict inevitable? And if we don't need 8 billion people - do we depopulate to a size that is sustainable or do we reach for the stars?
I'm 68 and probably won't see the biggest changes coming but my kids will. Sadly, I don't think we are ready for it.
youtube · AI Harm Incident · 2025-10-21T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
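The four coded dimensions and timestamp above could be held in a small record type. A minimal sketch in Python, where the class and field names are assumptions for illustration, not the pipeline's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the "Coding Result" table above.
@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

# The values from the displayed result.
result = CodingResult(
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
print(result.policy)  # → regulate
```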
Raw LLM Response
```json
[
{"id":"ytc_UgxtYYqknib4HvJfPQN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw67DJaD40ot2451mR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgymahFMg2SwBwZjw0B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy3fbjskHI42qdoA0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgztHflMnNwpxy6TYX54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxIuSpEBYP2GkpFK6F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGwb9yrVcTOEUTbvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBvMC-cUhk6haV6vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2Ky9EmBCkXI5Q6G14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwifJrNyWs9pQEOG214AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}
]
```
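A raw response like the one above can be parsed and sanity-checked before the codes are stored. A minimal sketch, assuming the allowed category values are exactly those appearing in these records (the real codebook may define more), and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values inferred from the responses shown above (an assumption;
# the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxIuSpEBYP2GkpFK6F4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # → regulate
```

Validating against a fixed vocabulary catches the common failure mode where the model invents an off-codebook label, rather than letting it propagate into downstream counts.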