Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
No. ChatGPT is programmed and trained to "simulate normal human conversation," and that's what humans normally do. The pauses and "um" and "uh" utterances are part of that simulation.
Remember that decisions about how ChatGPT is to perform its task as a chatbot are made primarily by the human "engineers and researchers" who programmed and trained it, not by the chatbot itself. When a chatbot begins making such choices on its own, it could become difficult for humans to discern whether or not there's an emergent consciousness at work in that chatbot, regardless of what the chatbot itself says about that. I'm curious about how a conversation between a person as intelligent as Alex and a truly conscious AI chatbot might go. Will such a conversation ever happen? I do not know. I'd like to witness it if that happens.
Source: youtube | Video: AI Moral Status | Posted: 2024-07-30T23:2… | ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugz1MWkz8ogs2qO8wTx4AaABAg.A6YekbzDHJVA6bC6gsahoO","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugzzz98G1zGrzCYb-el4AaABAg.A6SHR-Qzh45A6SHjAwgFB6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyVwXY5t3hVpsZ3lHR4AaABAg.A6RthJPf6luA6vK3BJTlx0","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgzQOzR3EDX85RGbNnN4AaABAg.A6Qq6QpRYFvA6QyFTDlxGl","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwgcQ86BSRygiG6vp94AaABAg.A6QkijQBrN8A6R-PYAunh-","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwgcQ86BSRygiG6vp94AaABAg.A6QkijQBrN8A6SL14lOFh5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx5sSvm46XjxYcRSYt4AaABAg.A6QL-jA63r8A6XfmLK3L5N","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx5sSvm46XjxYcRSYt4AaABAg.A6QL-jA63r8A6XgFVHEl2f","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxoKOIk3wazLEMD2Up4AaABAg.A6PnSd43CN7ABhFvHlWdjb","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxNrHc1EVQkQAbqrfl4AaABAg.A6PZZI5zc_ZA6PZj15WMSo","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
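The raw response is a JSON array of records, one per comment, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of turning such a response into a lookup keyed by comment ID might look like this; the dimension names are taken from the sample above, and the `index_codings` helper is hypothetical, not part of any pipeline described here:

```python
import json

# The four dimensions observed in the sample response; the real coding
# schema may include more.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and key each record by its id,
    rejecting records that are missing any expected dimension."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        index[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return index

# One record copied verbatim from the response above.
raw = (
    '[{"id":"ytr_Ugx5sSvm46XjxYcRSYt4AaABAg.A6QL-jA63r8A6XfmLK3L5N",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)
codings = index_codings(raw)
```

Indexed this way, the record whose dimensions match the Coding Result table above can be retrieved directly by its ID.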