Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "GOVERNMENTS are being BLACKMAILE in Africa to FORCE their people to TAKE BIOMETR…" (ytc_UgzOjGmTo…)
- "even beginner art is better than ai / literally EVERY piece of art is better than…" (ytc_UgzfzKV-S…)
- "even if AI was ethical (idk how) i would never join a community so hateful…" (ytc_UgyU4mgWh…)
- "Thank you!! Just the topic we need to address now, immediately! / Your podcast is…" (ytc_UgwOLEslm…)
- "Highly recommend the full report that this video is based on which has detailed …" (ytc_UgxeLuVKy…)
- "The fact that product management is extremely weak in most companies and that no…" (ytc_Ugwyy3DC-…)
- "It said “apple” when I say “is ai gonna take over the world in the future” and t…" (ytc_UgzGTSGUN…)
- "But I just want Google to give me a simple straight answer that sights it source…" (ytc_UgzjiPTuz…)
Comment
I do not understand... Is investing so heavily in AI really so important? Is this because of the paranoia of power? Perhaps the first to develop it will be the one to wield control? This is why this is crucial in eyes of many? I would think slowing down development and focusing instead on engineering and automation and species maintenance would be more ideal... Even if AI would be better at engineering, would there not be that much many more errors? THen we would have to go back and fix it. Maybe we may have huge condemnation and sanctions against states that break the laws that will be made in the future in regards to limiting AI/super intelligence. Its all an endless problem anyhow, right? That is what they have said, the engineers, no? I pray the Universe, Cosmos, the Divine Mind or minds keep us on track in accordance to its will and great plans and blessings for us, for our own wills to be in alignment and allegiance (alliance?), and for the strength to accept as well as change what we may. ... ( This is edited part of note, but up to now, I believed Mr. Balaji had not been had a hit out for him, but maybe he indeed did! It just makes it cloudy, less clear because he was so angry instead of dealing with and expressing his beliefs (against this massive wave) with clarity. That may have brought him great regret and remorse and caused him to take his own life. But when I observe how my sanity and goodness slips because of the lack of access to the things and processes that make us healthy humans beings... and hearing this information from the podcast... I can see how he may have been murdered by someone hired by someone who really, REALLY wants control as well as having in their deepest fears that their existence is in knots with all this AI crum. Not to knock superintelligence because it does have its brighter sides. And anyhow, why program the A.I. to preserve itself so highly? Should program it to protect and prioritize humanity and bio-geo life. 
Could just "make it" again if lost, right? Or err... is that like way too much work. Those were like a gazonquillion amounts of 0's and 1's we have all surpassed to get here...
youtube · AI Governance · 2025-10-16T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
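A coding result like the table above can be sanity-checked against the label sets the model is expected to emit. A minimal sketch, assuming the dimension names shown in the table and only the label values observed elsewhere in this log (the full codebook may allow more values than these):

```python
# Hypothetical validator for one coding result. The allowed value sets
# below are only those observed in this log, not a complete codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "unclear"},
    "reasoning": {"virtue", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimensions whose value is missing or not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The result from the table above passes; an empty list means valid.
result = {"responsibility": "developer", "reasoning": "mixed",
          "policy": "regulate", "emotion": "fear"}
print(validate(result))  # prints []
```

An out-of-vocabulary or missing value shows up as the offending dimension name, which makes malformed model output easy to flag before it reaches the results table.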
Raw LLM Response
[
{"id":"ytc_UgyQmsLkncewgAKwrXJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTi_kTJGsL2YfmUIB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzlnObI569QxdszLRJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxc6stdh-YCS7XKgVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBqQax-HHaF0CqEBp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyq3ReNhkeRCRWIkT14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSlyEFo7aym7VkRpx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxP3BwHMyo4eRKqtrh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyXssoZmmrJxgvgyDp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzF_gJ-Q8ExVaCN5OB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
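The raw response above is a JSON array with one object per comment in the batch, and the Coding Result table corresponds to the `ytc_UgxP3BwHMyo4eRKqtrh4AaABAg` entry. A minimal sketch of the ID lookup, assuming only the array shape shown (the tool's actual parsing code is not part of this log; two rows are copied here for brevity):

```python
import json

# Two rows copied verbatim from the raw response above; the real batch has ten.
raw = """[
{"id":"ytc_UgxP3BwHMyo4eRKqtrh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQmsLkncewgAKwrXJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_UgxP3BwHMyo4eRKqtrh4AaABAg"]
print(coding["responsibility"], coding["policy"])  # prints: developer regulate
```

Indexing by ID rather than scanning the list is what makes "look up by comment ID" cheap when a batch response covers many comments.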