Raw LLM Responses
Inspect the exact model output for any coded comment.
Any coded comment can be looked up directly by its comment ID.
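If the coded output has been exported for offline analysis, the same lookup can be scripted. The sketch below is a minimal example assuming a JSON Lines export (one coding record per line) keyed by the same `id` field that appears in the raw responses further down; the file name `coded_comments.jsonl` is illustrative, not the tool's actual storage format.

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coding record whose "id" matches, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines in the export
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: the first ID from the raw batch shown below.
print(lookup_coding("ytc_Ugz_SStYVlymmkEEbzR4AaABAg"))
```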
Random samples
- "people who say ALL AI art has zero effort behind it skiped half the video (jk bu…" (`ytc_UgzjrD_wG…`)
- "Hello, while I am not an expert on the topic, I understand that the general cons…" (`ytr_Ugz2dGMF4…`)
- "I really don't understand how you can give a pretty good explanation of what a n…" (`ytc_UgxCJBabi…`)
- "Satan wants to replace God. What better way than to create life artificially, wh…" (`ytc_UgzQI1b5_…`)
- "Yep. Our time clock has a facial recognition feature and it works less than half…" (`rdc_jv6putt`)
- "Blaming ChatGPT for how people use it is like blaming a knife for a crime. A kni…" (`ytc_Ugy3ovKJb…`)
- "@tivolee1666 AI will bite all of us in the but; how will Amazon still make money…" (`ytr_UgzvIlxvp…`)
- "ai bros say that it help people with disabilities / okay / whats your excuse than / if…" (`ytc_Ugx5Zi4ws…`)
Comment
Observation: at timestamp 15:58 Alex told the A.I. to ONLY answer using the words, “yes or no.” However at timestamp 16:11 the A.I. didn’t answer using “yes or no.” Earlier in the discussion Alex asked for the A.I. to only answer yes or no and the A.I. strictly answered as requested (or accordingly) UNTIL Alex “allowed” the A.I. to elaborate its response. So why the deviation in the A.I.’s manner of response? (Alex either didn’t notice or chose to ignore it). Regardless, this deviation is exactly why I have so many concerns about A.I. Something “caused” it to deviate from a direct request to answer only using “yes or no.” I won’t speculate on why it did that. I only want to bring attention to the fact.
Platform: youtube · Topic: AI Moral Status · Date: 2024-08-18T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
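Each record carries these four coding dimensions. A light validation pass can catch values outside the codebook before analysis; note that the allowed sets in the sketch below are only the values visible in this sample batch and are not necessarily the full codebook.

```python
# Validation sketch for one coding record. The allowed values are only
# those observed in this sample; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown above codes every dimension "unclear", which passes:
print(validate({"responsibility": "unclear", "reasoning": "unclear",
                "policy": "unclear", "emotion": "unclear"}))  # -> []
```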
Raw LLM Response
[{"id":"ytc_Ugz_SStYVlymmkEEbzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy4n9hf8Fp4Xqnk9l94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwGVqv-mNCIOJ8JFAd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxm6tJcGzFz8-idqTl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxT2NVdPVfexU9sb8p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgylTOe5exRUVS3eUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxsXv-lm4OQ9tS36lt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxOsF90rQTMJfDFo2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6UVqN8PPJ4xcijYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwowH5KsMnmzeJnknZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"fear"})