Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If AI become extremely mainstream
  Pros: work becomes easier
  Cons: millions l…" (`ytc_UgweSLwIW…`)
- "Crazy how there's backlash but A.I still gets a shit ton of engagement and likes…" (`ytc_UgyOVhUb0…`)
- "😮 did you know😮 we can make AI out of human brain cells they're called organoids…" (`ytc_UgxciINIh…`)
- "Yang warned everyone about driverless trucks, taxis, etc, some years back. He sa…" (`ytc_UgytpMCyC…`)
- "Whatever you do, don't talk to the character ai ai, Minori Shido, you will regre…" (`ytc_Ugwd3d8P9…`)
- "A.i will only get better and humans will only get dumber. Not a recipe that end…" (`ytc_UgyhWCtVd…`)
- "In all honesty… AI is simply a mirror. With or without it, you wouldn’t be more …" (`ytc_UgyTkpJG3…`)
- "It infuriates me how people don’t want to accept that generative ai could never …" (`ytc_UgwOvbMjK…`)
Comment
Summary from ChatGPT 😄
The video transcript discusses the potential benefits and risks of AI technology. It highlights the risks of weaponization, disinformation, job displacement, and manipulation. Regulatory intervention is suggested to mitigate these risks, including licensing and testing requirements for powerful AI models. The importance of transparency, accountability, and impact assessments is emphasized. The concern about AI's impact on elections and the need for content creator protections are mentioned. The idea of establishing a dedicated agency to regulate AI is debated among the participants. The transformative nature of AI and its potential military applications are also touched upon.
youtube · AI Governance · 2023-05-23T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwUTSovxCl-KggMAgR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxqa4Ns_9tRQ27_eVt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxsiY9R3pvwELtV3-94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8HibISMMnIIHBcAB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz65mbQ1n2ldaYVINB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwkiMX7jdJrv-ytW4p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzBoHawMd1VL6u4zmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxzaqUN_1IZWKZrnvN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZ6hz7MuEF35uNUCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaiGcfjCk_7n_4o1l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
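The raw response is a JSON array with one object per coded comment, keyed by the four coding dimensions from the table above. A minimal sketch of parsing and validating such a response follows; the allowed-value sets are assembled only from values observed in this response, not from the full codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension. These sets are inferred from the values
# visible in this one response and are an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}, validating each value."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-row example in the same shape as the response above.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"indifference"}]')
print(parse_coding_response(raw)["ytc_x"]["policy"])  # regulate
```

Rejecting unexpected values rather than passing them through makes drift in the model's output (a new label, a typo) surface immediately instead of silently entering the coded dataset.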