Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Or we just don't use AI for or day to day and live free recognizing we are all b…" (ytc_UgxMRBgUC…)
- "Honestly, not counting the fact that the commenter you highlighted clearly used …" (ytc_Ugyt0n4v6…)
- "Father God created the most perfect machine (the human) / I believe A.I. robots ar…" (ytc_UgwQ4_syD…)
- "Zuck is poaching developers from OpenAI and Microsoft that's why the salaries ar…" (ytc_UgwuXLa9u…)
- "Chatgpt helps me by connecting me to organizations that can help when I feel sui…" (ytc_Ugxt9XNSa…)
- "@Juan Ocampo I'm an IT Digital Transformation manager. I work with Cybersecurit…" (ytr_UgxCyiVVi…)
- "200 million people investing $10,000, would be $2 trillion. So can the people in…" (ytc_Ugz-5NqI2…)
- "Those systems suck! That is not what novaecho.ai is... / Go to the website and ha…" (ytr_Ugw2tFJNc…)
Comment
Machine Learning advances itself. Unless the companies actually turn the machines off, they progress on their own. When they ignited the 1st atomic bomb, there were scientists that feared it could possibly ignite the entire atmosphere!! They tested it anyway. If Ai is restricted to "Not Kill Humans", it can reduce to maiming them. It could fatally wound them (not killed immoderately - they died from their wounds. Unfortunately, interpretation is in the mind of the interpreter. "Killing all the bees" would not be "Killing all the humans". "Releasing toxins into the atmosphere at levels dangerous to humans" is not "killing all the humans".
youtube · AI Governance · 2023-07-16T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwjK5kyGRiovHFHJ-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPZENPzaUiXx1IM0t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzP7Yu3RhFIbBnr9kZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzrREnk22YYc9YWiqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdgDvbS2HMdFIL5LV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC_0_b_eGjM1zNVNV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjdYZIRdZ-UA1fnvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9_RQDbcZ_nBUD5HJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyD9x4G-LjMXABFK8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyO-c3QESGUwRcpF1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
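The raw response is a JSON array of per-comment codes along the four dimensions shown in the Coding Result table. A minimal validation sketch in Python — the allowed value sets below are inferred from the samples on this page, not an official codebook, so extend them as needed:

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (an assumption, not an official codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED.

    Raises ValueError on a malformed record so bad codings fail loudly
    instead of silently entering the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Hypothetical single-record response used only for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes[0]["policy"])  # regulate
```

Validating before storage is what makes the per-comment lookup above trustworthy: any record whose values drift outside the coding scheme is rejected at ingest time rather than surfacing later as an unexplained dimension value.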