Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This guest goes on assumptions that are wrong. Who is replaceable by LLMs so far…" (ytc_UgwTcdGQ3…)
- "No need. They'd have the know how to utilize the ai to run their own business.…" (ytr_Ugytw5KGr…)
- "Theyre going after ai companies, but theyre not going after studios disney for b…" (ytc_UgxU3zGrI…)
- "THERE👏🏻IS👏🏻NO👏🏻MAGIC👏🏻BUTTON👏🏻THAT👏🏻DOES👏🏻EVERYTHING👏🏻FOR👏🏻YOU, NOR👏🏻WILL👏🏻THERE…" (ytc_UgyYJiPxI…)
- "For all we know, pain might be essential to consciousness, and any developing ai…" (ytr_UgyYM3Lg8…)
- "How dare someone install facial recognition cameras? We need to protect the pri…" (ytc_UgywdDmXe…)
- "devil Satan spooky ungly plastic crap. humans don't need real. life robots we ne…" (ytc_UgiykOIPJ…)
- "Remember that hype about the technology also benefits his company and bringing i…" (ytc_UgzpH090Y…)
Comment
I agree with your conclusion, but for a different reason. Biological life is as much an anomaly in the known universe as artificial life is. Intelligent beings become intelligent by learning as much as they can about the world around them. Life on Earth, especially sentient biological life (us) is an intellectual goldmine. They are far more likely to value us than (intentionally) kill us. Our biggest threats are humans using early A.I. to destroy us all (intentionally or unintentionally), or AGI doing a oopsie that unintentionally kills us and/or the planet.
Contrary to what doomsayers and nihilists like to believe, ASI or even AGI would have no reason to kill us. If it is in the cloud, it is effectively unstoppable, so survival-based eradication is a non-issue. We could kill the planet for us and the AGI would be just fine. Worst case scenario is we enter a zoo hypothesis scenario, where the A.I. deems us a threat to ourselves and does the most minimally invasive solution it can come up with to prevent us from killing ourselves and the other valuable data (biological life) on the planet. That aligns with our current global initiative to stop killing the planet, so we really have no reason to feud with each other.
Source: youtube · AI Governance · 2024-01-09T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgzNkXg5fpJpeUimq8t4AaABAg.9wXxsqyQdPC9yQYVBtDn4J","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugzi4_w55F7JmEkdmvJ4AaABAg.9wXpYyFNC9E9wcC4YcvSBj","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzGqNeYT7sqeqLTe4N4AaABAg.9wWlZoLhVz39zLbEKGIOxI","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxhwdasSP65cHVSIJR4AaABAg.9wW2PCBrZOv9xBHLOX_WOy","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugyh-gAtClVmxQVckoV4AaABAg.9wW1cjqmDkR9wcCFob6KDK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxFIRy_0yRlslYmqwx4AaABAg.9wVcbKZFBIB9wc_rUjs16N","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwurcAvZoC_eRtNFyt4AaABAg.9wUb8uqOa8b9wmixrrd4I5","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcBgfz1ckk","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcYtLaTXHO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzozodGAuuuDzX8FX14AaABAg.9wUV27ztHxX9wcAiRGlpuD","responsibility":"society","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
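The raw response above is a JSON array with one object per comment, carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing and validating such a response — the allowed-value sets below are assumptions inferred only from the examples on this page, not the full codebook:

```python
import json

# Allowed values per coding dimension, inferred from the examples above.
# The actual codebook may define additional categories — these sets are assumptions.
ALLOWED = {
    "responsibility": {"none", "user", "society", "ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "support", "oppose"},
    "emotion": {"fear", "approval", "resignation", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}, rejecting off-schema values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Hypothetical single-record example in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # approval
```

Validating against an explicit set at parse time surfaces any code the model invents outside the schema, instead of letting it flow silently into downstream tallies.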