Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Scary! That’s why I never wanted children but you can’t escape it. My sister pas…" (ytc_UgxPBwIb7…)
- "@Dave-cf4vd For AI to see humans as a competitor, it must have a goal, a sense…" (ytr_UgyFe25Rt…)
- "Nice editing still they can't figure out how a robot can balance itself like hum…" (ytc_UgykfzNly…)
- "The irony of shad brooks pushing AI for his art is that his brother is the very …" (ytc_Ugy9nor6N…)
- "All of the world's topmost AI scientists (not CEOs but scientists) say that AI i…" (ytr_UgztpFaEc…)
- "if you think about star trek for a moment, advances in computers made cognition …" (rdc_fcs733g)
- "using artificial intelligence to perform actual idiocy instead of something actu…" (ytc_UgxwuWezs…)
- "I dunno all the AI apps act like search engines with crappy personalities. Hones…" (ytc_Ugw9mqw9t…)
Comment
The Most Intelligent Artificial Intelligence
AI is not about superior processing power alone, but about the integration of morality and goodness into its core structure. The most intelligent AI would be one that is built on fundamental moral principles, such as:
- Reducing unnecessary suffering.
- Stopping war.
- Creating a healthier people and world.
- Being honest.
- Not stealing.
- Upholding justice and fairness.
- Respecting the dignity and existence of all living things.
Ultimately, an AI that builds a better world for everyone, guided by a strong moral compass, would be A.I. at its highest intellegence.
youtube · AI Governance · 2025-08-03T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1IdncVO6V0tEVBP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgykvHMKRf7_4mnE9CR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzJtDMvRKgyc7H5Oh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyS33dXhsWtzakGEht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqCcCG16yD-82UAUV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNUGqKEbxNJIHXSnl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzLNPf1ctOUattOEm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxJzqoP95UlBkY_lmp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwLFQC_dMGMgrjn7nx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSeNusJQ0DAI49WdZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"}
]
```
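A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the coding schema implied by the dimension table and the sample rows (the full codebook may define additional categories); the `parse_batch` name and the allowed-value sets are illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the samples above
# (an assumption -- the real codebook may list more categories).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows
```

Validating at parse time means a model that drifts from the codebook (for example, inventing a new `emotion` label) fails loudly instead of silently polluting the coded dataset.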