Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- `ytr_UgyB01O45…`: "Josh Hawley wants all the power of AI so that he can become a very harsh, cruel,…"
- `ytc_Ugzjm22H1…`: "Hi! could you do a follow up on this and also include more AI services?…"
- `ytc_Ugx807Zq8…`: "Making a dumb persons ideas work without taking the Earth and consequences into …"
- `ytc_UgzYNPeMW…`: "Coders that are let go, can be hired to introduce AI use to those who cannot lea…"
- `ytc_UgyKxC95e…`: "I know I’m commenting a lot lol Last one! (maybe) I LOVE THE ART YOU MADE IN TH…"
- `ytc_UgyXIqwCC…`: "I'm sorry, but could you explain what part of your art is poisoned towards AI mo…"
- `ytc_UgxLjsvPY…`: "So you telling me that the ai is wrong from predicting that someone is 99.9% pro…"
- `ytr_UgzEWXTA6…`: "Most people have no idea how good these systems are getting. V13 is showing just…"
Comment
It’s disappointing to see such a sensational representation of AI on this channel. ChatGPT is not capable of “abstract thought” and all of the hypotheticals in this video are extrapolating fundamental technology we don’t even have yet. Usually The Why Files does a good job of representing the counterpoint, but in this case the other half of AI scientists don’t even get a look in.
It is very important that we understand the real and immediate risks of AI, such as biases and degradation through “data cannibalism”. When the conversation goes immediately to Skynet, it shows a gross misunderstanding of the technology and is nothing more than fear-mongering.
I love the channel which is why it’s such a letdown to see a video that really could have been so much better. Please keep up the great work, but when the topic is this contemporary and important, it deserves better research than this reflects.
youtube · AI Governance · 2023-07-07T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
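The coding result pairs each codebook dimension with the value the model assigned. Assuming a coded record is held as a plain dict (the field names here are illustrative, not taken from the actual pipeline), rendering it back into the two-column table above could be sketched as:

```python
# Hypothetical record shape mirroring the Coding Result table above.
record = {
    "responsibility": "company",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "outrage",
    "coded_at": "2026-04-26T23:09:12.988011",
}

def to_markdown(rec: dict) -> str:
    """Render one coded record as a two-column markdown table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {k.replace('_', ' ').title()} | {v} |" for k, v in rec.items()]
    return "\n".join(rows)

print(to_markdown(record))
```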
Raw LLM Response
[
{"id":"ytc_Ugz5HhoW-j17hkagyn54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwv2aknLeSF2ZVYldt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDF-3WadAu_hPBzVB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbJe0LJowROwwEC1d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw8EAQBUSB3hcXXz5h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSxfGXgxEgeEYh3_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHk6fX_B3cIt2Avx54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzgJcV0rGbAI1p_5iJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxq7RlsVRzs9QpDXX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwromi5kAlKUESQVcZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
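The raw response is a JSON array with one object per comment, so consuming it only takes `json.loads` plus a validity check against the codebook. A minimal sketch, with the allowed-value sets inferred from this single batch (the real codebook may define more categories):

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample batch above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "mixed", "outrage", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of per-comment codes),
    keeping only records whose values fall inside the codebook."""
    return [
        rec for rec in json.loads(raw)
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Two records copied verbatim from the batch above.
raw = """[
  {"id":"ytc_Ugz5HhoW-j17hkagyn54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8EAQBUSB3hcXXz5h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

codes = parse_codes(raw)
print(Counter(rec["emotion"] for rec in codes))  # Counter({'fear': 1, 'outrage': 1})
```

Filtering on the codebook rather than failing outright is a deliberate choice here: it lets one malformed record in a batch be dropped and re-queued without discarding the other nine.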