Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:

- `ytr_UgyhNElvQ…` — "yeah this kinda thought is why im not really worried about it... ai art is soull…"
- `ytc_Ugyaj7n3q…` — "Once AI can properly self improve, all bets are off. No one can predict timescal…"
- `ytc_UgxQ3ttoe…` — "Stephen Hawkins said that AI was the most serious threat to the existence of hum…"
- `ytr_UgyKsQAnS…` — "A great artists also said- "There's nothing wrong with having a tree (or AI) as …"
- `ytr_Ugwf-UDK1…` — "Well for one thing even toddlers without any prior knowledge can solve puzzles w…"
- `ytc_UgxDssnDJ…` — "I likw the outdoor activities but i despise the AI replacing teachers and childr…"
- `ytc_UgzcSI5dW…` — "When they are fully AI-Autonomous, these units will be deployed, first, on Moon …"
- `ytc_Ugy2mTsSH…` — "Are we sure that the UFOS are not intelligently controlled by AI being or things…"
Comment

> One thing I didn't understand is why at 31:15 when he is asked to reflect, is he says AI will do good things but call centres will become efficient and he is worried what those workers will be doing. But earlier he is talking that AI will overtake humanity. So why was his reflection about such a small thing? It gives me the impression that his outlook on the grandiosity of AI is not consistent? I know its a small detail but that confused me...

Source: youtube · AI Governance · 2025-06-17T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyBgRQls3umiTBdE4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKJrI_YWlwrjnSiLJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2QRg_VAeLzCNo1yF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDRq6T_QrPQ7hq8BJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVICtxjQ43YrpOSXt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz0N9kADycqEmpyqZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRJRzEc_nMu0Agf-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzguoMuQhlQugaawYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzMIPjjYr_QX1s-B7J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw7TKGcPxEOwDMmDx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
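The raw response above is a plain JSON array of per-comment codings, so looking up a coding by comment ID amounts to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the raw model output is available as a string (the excerpt below reuses two rows from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (two rows excerpted from the response shown above).
raw_response = '''[
  {"id": "ytc_UgyBgRQls3umiTBdE4F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMIPjjYr_QX1s-B7J4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzMIPjjYr_QX1s-B7J4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # mixed
```

In practice the parse step would also want to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except json.JSONDecodeError`), since nothing guarantees the model returns valid JSON.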