# Raw LLM Responses

Inspect the exact model output behind any coded comment. A response can be looked up directly by its comment ID, or you can start from one of the random samples listed below.
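Under the hood, a lookup by comment ID is just keyed retrieval over the stored responses. A minimal sketch in Python, assuming the codings are kept one JSON object per line in a hypothetical `raw_llm_responses.jsonl` file (the file name and layout are assumptions, not the project's actual storage):

```python
import json

def lookup_raw_response(comment_id: str,
                        path: str = "raw_llm_responses.jsonl") -> dict | None:
    """Return the stored LLM coding for one comment ID, or None if absent.

    Assumes one JSON object per line, each carrying an "id" field such as
    "ytc_..." or "rdc_..." (hypothetical storage layout).
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the coding for one of the sample comments below.
print(lookup_raw_response("rdc_m29pte7"))
```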
Random samples:

- "We have a hiring freeze, people are leaving, let’s make up some AI bullshit so p…" (`rdc_m29pte7`)
- "I have been screaming from the beginning that AI is the image of the beast that …" (`ytc_Ugzn9roq7…`)
- "You mentioned that AI isn't cheaper than pen and paper, and on an individual and…" (`ytc_UgxiPfLXo…`)
- "Looks like programming is not something that the interviewee is interested in. L…" (`ytc_UgxxxEvCc…`)
- "I'm not defending AI art at all, I'm honestly neutral, but, what about people wh…" (`ytc_Ugx8IUXS-…`)
- "(Thinking out loud) Internet = Holy spirit / AI = Jesus / Same shit different bucket…" (`ytc_UgyA8jnIO…`)
- "The same can be said for developers. They are fed up with AI programming simply …" (`ytc_UgxM0RAd7…`)
- "Wrong. AI still hallucinates. You can not trust the results given to you from an…" (`ytc_Ugw1zZm0K…`)
## Comment

> I think the question in the video's title is a kind of meaningless question because it presumes a number of things. That humans fully understand what their consciousness is, that AI is even able to have any "consciouness" by definition, that this consciousness will in any way be similar to human consciousness. I think AI by definition cannot have the same consciousness as humans, so it is meaningless to refer to it conceivably "becoming conscious" even if hypothetically.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2023-08-21T03:0… |
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
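The four dimensions in the table appear to be drawn from a fixed codebook. A small validation sketch, using only the category values that appear on this page (the actual codebook likely defines more, so the value sets below are an assumption):

```python
# Category values observed on this page; the full codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"resignation", "fear", "indifference", "approval", "outrage"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coding shown above passes under the observed value sets.
assert validate_coding({
    "responsibility": "none",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "resignation",
}) == []
```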
## Raw LLM Response

```json
[{"id":"ytc_UgzBV-QnqU0IttOryMl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugww8faQ2HdGzZ2uNEB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxMRkuJE6bqrVZ1KBJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwWEZQLMcLiPHyKMS54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyN6DSBoJqUR54NsJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxvxILNhb1qtFjb5Zt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgysyVTZoRsln2Q8wNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyGnb0M324RdP867wV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx-j0dhi176FO0z8XJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzsrfStMBCY_1qLYZ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
```