Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples
- "Don't worry, when AI take your job you basically are handicapped and you will re…" (ytc_UgxjF4-gK…)
- "The big point is: I do not get both and we could not read what you wrote as inst…" (ytc_UgxpYH2HF…)
- "There is one thing a lot of people probably missed ..i get it they might not bel…" (ytc_UgwHMbpUn…)
- "It's not even close, man. It's BS. And honestly, I think the AI will come to a c…" (rdc_m2a3q2q)
- "I use Gemini recently and it said today's Friday when it was really a Monday. Ha…" (ytc_Ugw3NfKme…)
- "@josehumdinger6872 AI is not "studying" anything. It's not actually intelligent,…" (ytr_UgxMEllv5…)
- "idk why you're being downvoted. Current AI-oriented GPUs like the A100 still was…" (rdc_jgoan6c)
- "Yes, exactly. When the routine work is automated and people are producing ideas,…" (ytr_UgzDLMmKN…)
Comment
I don't think we know exactly how the human mind works. Thus i'm not sure if we can really achieve to build the perfect Artificial Super Intelligence any time soon. Actually i'm not even sure that we're aware what we are playing with here. Yes the AI we design can work extremely fast, it can use vast amount of data, it can distinguish one object from another using sensors, can build patterns among big data etc. In short, it may be infinitely intelligent but does this mean that it can have consciousness or self-awareness? Do we know the math for consciousness?
youtube · AI Moral Status · 2026-03-03T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyjmPtExQkJf4UQEXZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRec8FIFj27VDCk0F4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyAjlHl4gpB5LeIOYZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzswSsAckqMsYh8BBh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyBEWZI9N2xGBGSnjZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzarnlDb81r-NKGfJZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxFioK4nnHwKj9u8O94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyT7us7jsa0m3t8mqd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz2I9H439Z0KTOkrgR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzAhZb6kkwA-GpcNB54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
```
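The lookup described above (finding the coded dimensions for one comment inside a raw batch response) can be sketched in Python. The `index_by_id` helper is illustrative, not part of the tool, and the embedded rows are a shortened two-entry excerpt of the response format shown here:

```python
import json

# Raw batch response as returned by the coding model: a JSON array in which
# each element carries the comment id plus the four coded dimensions.
# (Shortened to two illustrative rows; ids mirror the dump above.)
raw_response = """[
  {"id": "ytc_Ugz2I9H439Z0KTOkrgR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyT7us7jsa0m3t8mqd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coded row by its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
# Look up one comment by its id and read a single coded dimension.
print(codes["ytc_Ugz2I9H439Z0KTOkrgR4AaABAg"]["emotion"])  # -> fear
```

Keying the rows by `id` once makes every subsequent lookup O(1), which matters when the same batch response is inspected for many comments.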