Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples:

- "Ai is stealing people’s art so artists are getting there art stolen and used by …" (ytr_UgwKkTbxC…)
- "If we are foolish enough to give an artificial intelligence consciousness then w…" (ytc_Ugzo3L3CC…)
- "The reason the 40k stuff was taken down was the author not marking most of it as…" (rdc_k9hqr3t)
- "AI offers us an opportunity to observe human nature from another perspective. An…" (ytc_UgyRc1yMl…)
- "Now when you say manned, I have to wonder if of all the successful rocket launch…" (rdc_cjouys3)
- "richest guy in the world cheaps out on the core technology of self driving, stil…" (ytc_UgyOrgp6D…)
- "If future firms are driven by AI, and different AIs will converge to be quite si…" (ytc_Ugzd91-FY…)
- "tested on Grok and you are correct - it didnt identify epidural, yet Gemini said…" (ytc_UgzLLRZpF…)
Comment
reading the comments here I think Kurzgesagt skipped an important video explaining the difference between an AI and a program (though they scratch the surface with siri and all).
People really think we just program softwares that will feel pain or happyness, letting us decide what should make the software happy or sad.. that's not what an AI is, or will probably be in the future at least.
We're not gonna tell that AI how it should feel under each circumstance, it will learn on it's own what it feels to "be", wether it's through physical stimuli in a robot form or interaction in text or vocal etc.
that's what that video is all about, the future AIs that will come to be, not the fucking softwares inside you iphone or fridge
And if someday we get such a smart AI we won't really have a debate if they deserve rights or not, they'll take it on their own, it's not animals we can just kill or slap, a powerful AI will be better at breaking and building securities that we ever will.
Source: youtube · AI Moral Status · 2017-02-24T17:3… · ♥ 28
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
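The raw response above is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID for lookup — the function name and the abbreviated two-row payload are illustrative, not part of the actual tool:

```python
import json

# Abbreviated example payload in the same shape as the raw LLM response above.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
 {"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment ID -> coded dimensions, skipping rows missing any dimension."""
    out = {}
    for row in json.loads(raw):
        if "id" in row and all(d in row for d in DIMENSIONS):
            out[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return out

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ughn2l5l5nUY93gCoAEC"]["emotion"])  # indifference
```

Validating that every row carries all four dimensions before indexing is the useful part: a batch-coding LLM can drop or rename a key, and silently admitting partial rows would corrupt downstream tallies like the Coding Result table.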