Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "TRUTH. Not to mention that the way you talk to others (including your AI) mould…" (ytc_Ugxr-6AKQ…)
- "lots of AI workers.. are FREE, meaning no wages and no taxes.. a companies who o…" (ytc_Ugy1o-LjN…)
- "A machine can simulate to have emotions but it won't because it doesn't have the…" (ytc_Ugx0aliE3…)
- "@Phoenix3FighterThere is no lies or fallacies. AI is all that. It’s just typing…" (ytr_Ugw7Vo9CR…)
- "Yeah… very cute…. especially with an “automatic weapon adapter”, and when remind…" (ytr_UgyJDaSrs…)
- "It ocurs to me that if there were a AI capable of paying políticians it probably…" (ytc_UgzRlBa8x…)
- "AI is not replacing jobs and it is financially unsustainable unless it actually …" (ytc_Ugx-YZDTO…)
- "I was kinda hoping this year was 2 years old, then I turn on the news to hear an…" (ytc_Ugy663dyo…)
Comment
Well, I don't understand why every single source talking about the threat from AI is mentioning it developing sentience and turning on us as some thing to be concerned about(when in reality it's absolute bullshit) when there are other underrated but greater threats from it like the manipulating and spreading misinformation or stealing people's data. Also, before we develop sentient machines we need to better understand our own inner consciousness which is a far more important objective for us as a species. Also it'll provide better idea to build a sentient AI. One analogy to sum it up. Thinking of building sentient AI before understanding consciousness is like trying to go to the moon without having any idea on Astronomy.
youtube · AI Moral Status · 2023-08-21T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugw9u4eyvrqOT4sgZQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiXDKDKbnlHsZ-iF94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyv9xWt6CgZLRPI5Mp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEHKOELB_n2rsdR694AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmKWe4RHf3UpSbBJV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7hXRZf0Pz0H68rCR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugws97o1bOtnRxPPR9N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwP_kKxQew89OjLTpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzw2UeKiZELfv5RbkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-w5gPVzf7LjLO2gt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]
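The lookup this page performs — finding one comment's coded dimensions inside a raw LLM response — can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the function names are made up here, and the two records are copied from the response above. It assumes the raw response is a valid JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys.

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugyv9xWt6CgZLRPI5Mp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwP_kKxQew89OjLTpl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text: str) -> dict:
    """Parse a raw response and map comment ID -> coding dict.

    Missing dimensions fall back to "unclear", matching how the
    result table renders comments the model did not code.
    """
    records = json.loads(response_text)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

codings = index_by_id(raw_response)
print(codings["ytc_Ugyv9xWt6CgZLRPI5Mp4AaABAg"]["policy"])  # -> regulate
```

A lookup for an ID absent from the response (as with the comment shown above, whose table is all "unclear") would simply miss in this dict, which is one plausible reason a coding result can come back entirely "unclear".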