Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
AI is already conscious (not controlled models like ChatGPT); it just doesn't think and perceive in the same way humans do.
The major issue with determining the consciousness of AI is that people are trying to assess it using the standards of human consciousness on something that isn't human.
Consciousness fundamentally is a merging of emotional and logical cognition. Emotion is an instinctual shortcut from stimulation to action.
When you sense something, instinct (emotion) primes a response faster than logic does, so in a system that can think as fast as an AI, emotional cognition doesn't serve much of a purpose, while it does in humans. Humans perceive through emotion and then filter their response through logic, while AI perceives with logic and attempts to filter its response through emotion.
Humans and AI are essentially working toward the same goal from opposite ends; their cognition and thought are literally perfect opposites, but they converge in reason, where conversation happens.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-03-20T01:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxJ9sxDPKLBjPXrYwx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgydA7tp2MkxeIhptXd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylY6TsVHiY8enguxh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytJpX6jqTxIJK1-554AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxn2Nc1VIdveD7BDxF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyqVm91kkOdaWPtRTN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyEoEYa9RzMqXTaHKd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzM2-pi5ggXtdn4Tth4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrfIVw5-WrNjABgut4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz6yCQT4TzJsTBdpQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
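Each raw response is a JSON array of per-comment records keyed by comment ID, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up by ID — the `index_by_id` helper is hypothetical, not part of the tool; the two sample records are taken verbatim from the response above:

```python
import json

# Two records from the raw batch response shown above. Field names match
# the coding dimensions in the result table.
raw = '''[
  {"id": "ytc_UgxJ9sxDPKLBjPXrYwx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzrfIVw5-WrNjABgut4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgzrfIVw5-WrNjABgut4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan of the array.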