Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Why are we still questioning their stance on AI replacing real people’s jobs whe…" (ytc_Ugz6QgEbf…)
- "I like Shad’s sword videos but any of his videos featuring AI I don’t watch, sam…" (ytc_UgyM7H7OV…)
- "The worst part is the rate of speed at which the capabilities are advancing so a…" (ytc_UgyDrsaZl…)
- "No ai butlers? This like flying cars all over again. Give the people what they w…" (ytc_UgwMLbBKq…)
- "I don't see much difference between this guy and someone who believes in somethi…" (ytc_Ugy0s5CXW…)
- "As an artist for me even if AI takes over all the artist's jobs there will still…" (ytc_Ugwj314Ca…)
- "Unfortunately doesn't stop how an ai is trained but it still looks good in the e…" (ytc_Ugw29AiVO…)
- "I think ai should be used as a tool - and I don’t really care if you use ai art …" (ytc_UgyBLDP5Z…)
Comment

"I don’t think AI will blowup the way we are predicting at the moment. The two main reasons are, 1. We won’t be able to produce the amount of energy required to carry out AI operations to that scale whilst having to provide the energy for 8billion odd people. 2. Human survival instincts will overpower the AI survival and shut it down or control it so that humans still can be the ones that control the world."

youtube · AI Governance · 2026-03-31T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy5pIE9HkxIa9ZmiMF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-4bRS7gwuwvt71rN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxb0_zUJWZArqEHmk14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzGxb_8OGjsH-aYlt14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxp7QcDkJBGHOZPqdl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxGmw8VpbKIoVmOp8B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwfIaMuNJqiUwpExPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFGoe-ilpvLxtBf4F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_1aPnZBDqHUani4h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLQooXPCOtULsyrSx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
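A raw response like the one above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). Below is a minimal sketch of how such a response could be parsed and validated. The `CODEBOOK` sets are inferred only from the labels visible in this sample response — the full codebook may contain additional values — and the example ID `ytc_example` is a hypothetical placeholder.

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may define more labels (assumption).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, collecting
    any (id, dimension, value) triple that falls outside the codebook."""
    records = json.loads(raw)
    coded, errors = {}, []
    for rec in records:
        cid = rec.get("id")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((cid, dim, rec.get(dim)))
        coded[cid] = {dim: rec.get(dim) for dim in CODEBOOK}
    return {"coded": coded, "errors": errors}

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
result = parse_raw_response(raw)
print(result["errors"])  # an empty list: every value is in the codebook
```

Collecting violations rather than raising on the first one makes it easy to surface all out-of-codebook labels for a batch at once, which is the natural failure mode when an LLM drifts from the prompt's label set.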