Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Replacing a ceo with AI honestly sounds like a good idea, but at the same time I…" (ytc_UgxBywksi…)
- "(I hope to do things such as GO TO SCHOOL)And if someone slid a robot into my sc…" (ytr_UgjBnumoY…)
- "Instead of pancake I saw a video like this on dinner where the last image create…" (ytc_UgyxnYNj4…)
- "“All of mankind was united in celebration, we marveled at our own magnificence a…" (ytc_UgzNAtR7n…)
- "It's fake intelligence, and has no soul, no feelings, not alive, only has inform…" (ytc_UgzvWvlhg…)
- "I think the only way to be an 'artist' when using AI is if you're applying your …" (ytc_Ugyad9-9l…)
- "I learnt this lesson very early in the couple months I started working. Whether …" (rdc_oi07uhi)
- "The problem with the analysis is that you establish a hypothetical situation in …" (ytc_UgizhDQN0…)
Comment
he explains well that we are not as smart as we think we are when it comes to AI and we can't predict how a super intelligence will solve our issues in the world, it can very well find solutions to exterminate huge population of humanity as it thinks pragmatic and not emotionally imo, and how greed and power driven our leaders are the race to having a better AI is a race and not a cooperation between countries. this is why we also need world peace first before we go and evolve Ai anymore. In the world so much is happening, wars, riots and alot of bad things that everyone is distracted from the real big threat and it is making AI a superintelligence that can think for itself because it could very much be the end of humanity.
youtube · AI Governance · 2025-09-04T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgylYfOQFtlimIMj-FZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzavK5lJJe0-qk4wfR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwyxZOi9XazN6COB7V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyKE4NFbart1KF4ond4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7hraCrDDATMMkbDl4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz6faKiUwoNkjotuGx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw2IPhqVMxmUA7PeEZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxREpTCQYFevy55vsl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzJYEU0Z3Uwb1ejTTt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz_lQV9d58aP58IpRR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
```
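A raw response like the one above can be parsed into a lookup-by-ID structure with a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the allowed code values are inferred from the codes visible on this page, and the real codebook may include more categories.

```python
import json

# Allowed values per coding dimension, inferred from this page (assumption:
# the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, dropping entries with out-of-codebook values."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if cid and all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# One entry from the response above, used as a worked example.
raw = ('[{"id":"ytc_UgyKE4NFbart1KF4ond4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_raw_response(raw)
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: one parse per batch, then constant-time lookups.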