Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "thank you for talking about this! its absolutely disgusting for someone to even …" (`ytc_UgyjytgHi…`)
- "What? There's no good science fiction that presents a future with super intellig…" (`ytc_UgzwbBP8b…`)
- "They dont feel them... Its an algorithm based on the way certain words are strun…" (`ytc_UgyXtuMxQ…`)
- "So basically stage one: the elite no longer need humans to serve them or sell th…" (`ytc_Ugx4USAPM…`)
- "I am a PhD student and use chatGPT for certain things, like quickly finding info…" (`ytc_UgxfqUDLE…`)
- "I don’t see why this is such a surprise. Ai is based off of human intelligence, …" (`ytc_Ugxbe99fC…`)
- "I think most 'art' is just design slop for marginal kitsch aesthetics, but the r…" (`ytc_Ugw1YxXPX…`)
- "most of us are not psychopaths because we have social cognition and moral module…" (`ytc_UgyxUGGW5…`)
Comment

> You guys are asking all the wrong questions. The question should be why are we here, at this stage with AI? What do these technology leaders want? What does it mean to want the betterment of human kind? How do they define that?
>
> Ask them!
>
> We have a right to ask them if they are doing what they are doing.
>
> Questions need to be very personal here because these people are obsessed with AI in an extremely unhealthy manner. They see the consequences of AI adoption as a secondary reaction to their work precisely because they do not factor this topic from other points of view, not in any real meaningful way. They contemplate the consequences of AI the same way a guy in a race car contemplates the scenery he zooms by.
>
> Just like a race car driver, these tech people only focus on the road. They are one with the car that moves over the road, and everything else is a background to them.
youtube · AI Governance · 2023-05-21T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwtyk-IxXsE4iSKpjV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxWgspHlWGg8MzjC4V4AaABAg","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyCaK0RzjpLlVuCSIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdmUdtPxcDebWJHZ94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhkRfMhhb744bDTzB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxNgbwUPOOFfdLkQo94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyQyhvB-PXjy03cGKZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyk0BSkYrLEjFwCI_54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzQtqJcgzbUf3VRwD54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6F5j4cM5RFklVtZV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
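Each record in the raw response codes one comment on four closed dimensions. A minimal sketch of a validator for such records, assuming the value sets visible in this sample are the complete codebook (the real codebook may include values not seen here):

```python
import json

# Allowed values inferred from the coded samples on this page; this is an
# assumption, not the tool's actual schema.
SCHEMA = {
    "responsibility": {"developer", "government", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

def validate(records):
    """Return (comment_id, field) pairs whose value falls outside the schema."""
    errors = []
    for rec in records:
        for field, allowed in SCHEMA.items():
            if rec.get(field) not in allowed:
                errors.append((rec.get("id"), field))
    return errors

# The first record from the raw response above passes cleanly:
raw = '[{"id":"ytc_Ugwtyk-IxXsE4iSKpjV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
print(validate(json.loads(raw)))  # → []
```

A check like this is useful because LLM coders occasionally emit values outside the requested label set; flagging the offending comment ID and dimension makes re-coding targeted rather than wholesale.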