Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwXdAXqP…: "AI art is making a lot of problems for real artists and a lot of people fear tha…"
- ytc_UgycAfN3P…: "Yes, definitely. I leave this comment here for the future robot overlords to hav…"
- ytc_Ugwm16gDJ…: "When human makes idol and fear it like God then human can very well build AI tec…"
- ytr_UgyTWVCYN…: "Really no lol what ru getting at?? IAM saying AI is getting a little real and yo…"
- ytc_Ugx67u6m0…: "I saw the title of the show and I immediately thought of robot Rock by daft punk…"
- ytc_UgzDOOObv…: "once there are no artists to steal art from, AI will consume itself like the our…"
- ytc_UgxJFppOh…: "\"Chicago police department predictive policing AI tool\" The what now? I guess we…"
- ytc_UgxZb2HRU…: "Honestly didn't expect much, but clever AI humanizer makes AI text sound like a …"
Comment
I don't think many people realise that we are still in the first birth of the AI disruption. Sure, we can use current AI models to help us learn, create, do, etc. But remember Napster, and then Facebook, and YouTube? We had no idea what impact the digital revolution would really have in its rebirth (I.e. what it REALLY did that was 'new'? And how did our guard-rails work with that? 😂 ). AGI and ASI are not the same as the industrial revolution. This is much closer to the end goal of our tool creation. This disruption is about replacing humans and their culture - not just augmenting such. ASI and robotics will do what we can do, but better. What will humans do that has worth? If we are replaced, why will we need to learn? Yuval Noah Harari says we will take drugs and play video games. Is that dyatopian, or is it the nirvana that we have been heading towards for millenia? I'm not sure...
Source: youtube · 2023-05-15T13:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4kbFgssihkDTulSB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyiWG6maJ1Sznc-ZUJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuRXbhVHcSsP3tkXZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5j1CQcvpVQLkz2sx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHT4Z_mbaS1YaJTZl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxh0_LIOP00AUXtSeh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6Xu1TcN3ci2mHR6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyBbmqDOWnQZLQbupp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOQ_nHU4Y73_H5g3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdvQCl90ta411NzMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
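The raw response above is a JSON array of per-comment codes, one object per comment with four coding dimensions. A minimal sketch of how such a payload could be parsed and indexed to support the comment-ID lookup shown at the top of this page (the function name and the `"unclear"` fallback for missing keys are illustrative assumptions, not part of this tool):

```python
import json

# The four dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM JSON array and index the codings by comment ID."""
    records = json.loads(raw_response)
    index = {}
    for rec in records:
        # Keep only the expected dimensions; any key the model omits
        # falls back to "unclear" (an assumption, not the tool's rule).
        index[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return index

# Example with the first record from the response above:
raw = ('[{"id":"ytc_Ugw4kbFgssihkDTulSB4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytc_Ugw4kbFgssihkDTulSB4AaABAg"]["emotion"])  # indifference
```

Indexing by ID also makes it easy to spot comments the model skipped or coded twice in a batch, by comparing the index's keys against the IDs that were sent.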