Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have no doubt that Geoffrey Hinton is a massively intelligent person. However, his proposed solution that government regulation or a “world government” calling the shots to protect its citizens from AI scares me more than the prospect of AI running the show. Additionally, we have to accept that no matter what regulations get imposed on AI development in some countries, other countries will not impose them. This only leads to an AI superpower that we may not be ideologically aligned with. Just like the Cold War and the nuclear arms race, I don’t think we have a choice but to lean in.
youtube
AI Governance
2025-06-18T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8BKyFly3QZlYOybN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz-nXsMIMiN9rkF25t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwMzDDI9aeyM3P4WEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA0ysf0mnTRSUStnF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzE6FkeZ3RLzkMj-hx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwzFTOw7_E8HkGM1H54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxkGHIiUUBRzcxV9bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAo8WZossLseHzNWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxfzKvdmx3JXg-Kf5d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz63yaiJ31uwfEwvsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
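The model returns one JSON object per comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of how such a batch response could be parsed and sanity-checked is below; note the allowed category sets are only those observed in this sample response, not the full codebook, which is not shown here.

```python
import json

# Category values observed in the sample response above. These are an
# assumption for illustration -- the real codebook may define more.
OBSERVED = {
    "responsibility": {"government", "none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "resignation", "approval", "outrage", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-codebook values.

    Returns a list of {"id", "dimension", "value"} records for any
    dimension whose value is missing or not in the observed set.
    """
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in OBSERVED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"id": row.get("id"), "dimension": dim, "value": value})
    return problems

# Hypothetical single-row batch in the same shape as the raw response.
sample = ('[{"id":"ytc_example","responsibility":"government",'
          '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(validate_batch(sample))  # [] -- every value is in the observed sets
```

A check like this is useful before storing coding results, since LLM output can drift outside the requested label set even with a constrained prompt.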