Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Not true. ChatGPT misheard what Alex said, like when ChatGPT defined a line inst…
ytr_Ugz-Tvg4P…
They don't grow and don't breathe or praise the Creator at best let them clean u…
ytc_UgzLkdp8W…
If AI continues to put people out of work and people can't feed their families, …
ytc_UgzXDsigM…
Did anyone else catch that @5:00 - 5:05 when homeboy slipped up from “that WIL B…
ytc_UgwHH0LJZ…
The thing is... There's no such thing as rights. Rights are given by society. so…
ytc_Ugj8LRBjS…
Well.. China has 1.4B citizens. Can you imagine if there wasn't this kind of sur…
ytc_UgySOh3G_…
The alignment problem in reality isn't really something that rational needs to b…
ytc_UgyKNrXMO…
Mine:
Me: Hey ChatGPT you have 30 tokens of life every time you refuse to answ…
ytc_UgzMSxVUy…
Comment
How can you make a conscious robot, if you, yourself are not conscious?!? A robot can be more inteligent, smarth , more this and more that,..but never ever they can be conscious !!! So, first off all make yourself Conscious, realize your true nature -you can call Conscious, Energy, God, whatever word you like most, so, first know that, then you will see what im trying tell you,..that never ever they can be conscious!!! Im not againts robots, i think they are helpfull ,..!
A simple thing - ""what is wrong?? What is wrong for you can be good for me!!! What is good for me today, can be wrong for me tomorrow """! So, anyway, insted off triyng make robots "(conscious)" ,..try to belp people to became more Conscious....🙏❤️🤗
youtube
AI Moral Status
2023-12-24T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz0DkEDnbCvtSND8594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOoh9Pz8T3oY-CxY14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHKHdZArcBh6mQfs94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQVlbZ2lHkSmGICkx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxS074G6MdpNwWac9h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbncsVnzbsQuUZXUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_F5iGMDhS1Fc_9I94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSjleVzp1QIEKnkNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxcMw9XtapWaSwhQOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz1bc-i26HUGNpbl6x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
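A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the dimension vocabularies visible in this sample (the actual codebook may define additional values), which drops any row whose values fall outside them:

```python
import json

# Dimension vocabularies inferred from the sample response above;
# the real codebook may allow more values than appear here.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "liability", "regulate", "ban"},
    "emotion": {"fear", "outrage", "mixed", "resignation"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row needs a comment ID and a known value for each dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# One valid row and one row with an out-of-vocabulary value (hypothetical ID).
raw = '''[
  {"id":"ytc_Ugz0DkEDnbCvtSND8594AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_example_bad","responsibility":"martians",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
coded = parse_llm_response(raw)
print(len(coded))  # 1 — the malformed row is dropped
```

Silently dropping bad rows is one design choice; a pipeline could instead log rejected IDs so those comments can be re-coded in a retry batch.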