Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "That creepy to see..What if that Robot pointed the gun at that man & fired...I w…" (ytc_Ugw5WAXpg…)
- "Nothing wrong with AI. But I would like to see people needing to pay for trainin…" (ytc_Ugyq1zBqR…)
- "AI may be able to think of creative uses n' stuff, but I'd like to see them writ…" (ytc_Ugzb_vt66…)
- "We need to wake the fuck up to the propaganda potential of foundational models. …" (rdc_ky8qcnd)
- "Guess it all the workers will be out of work then the robots will be the one buy…" (ytc_UgxORLnXs…)
- "@falcon1209which what all of the current madness is about. They see chat gpt an…" (ytr_UgzBjiryt…)
- "No matter how bad of an artist I am, I can always remember that at least I'm not…" (ytc_UgxqhzxeV…)
- "Then prove us medical people wrong with real-world results, not talk. We’ve be…" (rdc_fcui144)
Comment
The real breakthrough was always gonna be AI making AI because then we’d lose control of what’s inside them and wouldn’t even know how they worked. There’s already deep learning, a form of AI where machines get to results that can’t be reverse engineered. So it’s definitely not a stretch and once the breakthrough happens, the world changes and you can throw out all the existing rules and norms because we’d be uncharted territory and I for one cannot wait.
Platform: youtube · Video: AI Moral Status · 2019-05-25T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzaERbUY6aNv0bDWn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKYdbdZLIQtogqrGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzScYccHO5Bt5h9B714AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyesJQ9EnB4XZnPZyF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw6S4JWM68GaEcAKa94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwsK3U1Js6lsqygvYZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNmFP6KjDf5Rwnv6Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCFn4HpjcCAWJbW214AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyc49PobndzcEhmq7t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzPhNMxQ5gyc3xTg8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
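A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming a codebook limited to the category values that actually appear in this sample (the real codebook may define more); `parse_coding_response` and the `ALLOWED` sets are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "unclear"},
}

def parse_coding_response(raw):
    """Parse a raw LLM coding response and keep only rows whose
    dimension values all fall inside the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: a valid row passes, an out-of-codebook value is dropped.
ok = '[{"id":"a","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}]'
bad = '[{"id":"b","responsibility":"martians","reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(len(parse_coding_response(ok)))   # 1
print(len(parse_coding_response(bad)))  # 0
```

Invalid rows are dropped rather than repaired here; a production pipeline might instead flag them for re-coding so no comment silently loses its codes.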