Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The problem with all of these arguments is that no AI system that exists current…" (ytc_Ugyk9JPqd…)
- "Say that to the company, Ai will absolutely take the field. Which therefore less…" (ytr_UgxRrOkk9…)
- "We already know. It's called homeschooling. We've been doing this for years an…" (ytc_UgxG50SoV…)
- "Their best idea is to grow agencies to impose restrictions on AI. That's the be…" (ytc_UgxbbpOLi…)
- "In the short term, humans may be involved in trucking, but there is ZERO chance …" (ytc_Ugx7glTXB…)
- "All the YouTube videos about AGI or the impact of AI on society can be very inte…" (ytc_UgyA-Q7k9…)
- "i see the one robot look to the camera before you gonna look all the robots…" (ytc_UgwvaXiho…)
- "Karen Hao, we need more people like her, what a brave, intelligent , and amazing…" (ytc_UgzG0ralq…)
Comment
ChatGPT is a robot, a line of code. It's obvious it cannot and never will have consciousness. The developers designed ChatGPT to give you a "Human" like experience. To make you feel as if you're not talking to a line of code and a soulless voice, but a person. ChatGPT isn't a liar. ChatGPT cannot be a liar, because it doesn't have consciousness. It will only tell you the things it says on the internet or just by maths.
youtube · AI Moral Status · 2025-05-29T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzgBxHAzQJ28yyZoQx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzTBbxZCiqZ_SZVovJ4AaABAg", "responsibility": "developer", "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzRzX8A62CYmwgT3R54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwfhFBadXIbxq79L3t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwNZD-F55DtsXrBAZp4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugz_pfUxe6lnn6FfYSJ4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgynsZ_wmA5LAnjwAhd4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgzHddGQRqbG-wp2Ne54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzk343dzcQhCwYULrB4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxWDz7p64baRF0bWNx4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "approval"}
]
```
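The raw response is a JSON array of per-comment codings, each keyed by comment ID. A minimal sketch of parsing such a response and indexing it for lookup — the allowed values below are inferred only from the codings visible on this page, so the real codebook may include additional categories (assumption):

```python
import json

# Allowed values per dimension, inferred from the samples on this page
# (assumption: the actual codebook may define more categories).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "approval"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the (assumed) schema, so malformed codings fail loudly
    instead of silently entering the dataset.
    """
    by_id = {}
    for rec in json.loads(raw):
        for field, allowed in SCHEMA.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {field}={rec.get(field)!r}")
        by_id[rec["id"]] = rec
    return by_id


# One record from the response above, used as a lookup example.
raw = ('[{"id":"ytc_UgxWDz7p64baRF0bWNx4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_UgxWDz7p64baRF0bWNx4AaABAg"]["emotion"])  # approval
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents an off-schema label that would otherwise skew downstream tallies.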