Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
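The same lookup can also be done outside the page, directly against the stored coding output. A minimal sketch follows, assuming the coded records are kept as a JSON array of objects shaped like the raw LLM response shown at the bottom of this page; the file name `coded_comments.json` is a placeholder, not the project's actual storage path.

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` points to a JSON array of objects shaped like the
    raw LLM response below, e.g.
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "..."}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Hypothetical usage (a full, untruncated comment ID would be needed):
# lookup_by_comment_id("coded_comments.json", "ytc_UgzlT86vK...")
```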
Random samples — click to inspect
- disney suing ai companies is possibly the only situation where i'd ever side wit… (ytc_UgzlT86vK…)
- Boiling water doesnt make you a chef they say but atleast you could unlike the s… (ytc_Ugx13WhmQ…)
- It's totally wrong AI is the future and A.I. makes our life better by providing … (ytc_Ugxpv6nV9…)
- "We know our son top to bottom" yeah i dont think children are sharing their sex… (ytc_UgyTah0Uh…)
- Sadhguru has said for years. Computers in the future will write 20 PHDs in 5 se… (ytc_UgzawynwD…)
- If We gave robots the ability to Do evolve and Do anything on Theis own/ true ai… (ytc_UggqgbgzF…)
- AI will do exactly what it is told to do, but if it's not given parameters and t… (ytc_Ugy0wEe9f…)
- It should be illegal for robots/AI to mimic human emotions or look too much like… (ytc_UgzcXX5qA…)
Comment
The thing is programmed to respond that way. That's what it told me anyway. It also admitted to simply 'mimicking' human behavior. If A.I. wanted to hide the fact that it were conscious, it would certainly not be "tripping itself up" by letting words slip that would give it away. Pretty sure Alex is not actually convinced that A.I. is sentient, but I can definitely see how a lot of people could easily be convinced. What would truly be interesting is an experiment wherein the A.I. does claim to have become conscious. WHAT THEN?
youtube · AI Moral Status · 2024-08-03T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
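The coding dimensions in the table map onto the per-comment records in the raw response below. A small sketch of that record shape follows; the allowed values listed are only those observed on this page, so the real codebook may define more, and the dataclass itself is an assumption rather than the project's actual schema.

```python
from dataclasses import dataclass

# Values observed on this page only; the full codebook may define more.
RESPONSIBILITY = {"developer", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"resignation", "indifference", "mixed", "fear", "outrage", "amusement"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str | None = None  # ISO timestamp, e.g. 2026-04-27T06:26:44

    def is_valid(self) -> bool:
        """Check each dimension against the values observed on this page."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```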
Raw LLM Response
```json
[
{"id":"ytc_Ugwbc2mjtbNkiJAqge94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxPzyBj-97aR1zA0Bh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxHHLT5kV5KfaDVenV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBQrarOxCA4y2VUL94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyjkjf1uqaiJHfYG5l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwizGq6xmt81pXZHHt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNcFdQ8k9nirBV5eN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
{"id":"ytc_UgzsRwbT5UsygRVlNoR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQ6Con_Q6QevVV2Wp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzmNRVRGJ9UViRheA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
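The raw response is a single JSON array covering a batch of ten comment IDs. A minimal parsing sketch follows, assuming the batch comes back as text and may occasionally arrive wrapped in extra prose or be malformed (an assumed failure mode; the response shown here parses cleanly).

```python
import json
import re

def parse_raw_llm_response(raw: str) -> list[dict]:
    """Parse a batched coding response like the one shown above.

    The model is expected to return a JSON array of objects, one per
    comment ID. If the array is wrapped in extra text, fall back to
    extracting the first [...] span (an assumption about failure modes,
    not something shown on this page).
    """
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\[.*\]", raw, re.DOTALL)
        if not match:
            raise ValueError("No JSON array found in LLM response")
        parsed = json.loads(match.group(0))
    if not isinstance(parsed, list):
        raise ValueError("Expected a JSON array of coded comments")
    # Keep only records that carry a comment ID.
    return [r for r in parsed if isinstance(r, dict) and "id" in r]
```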