Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Ai rules ima get my hands on some healium 3 and go to Jupiter. Also you're assum…" (ytc_UgzgcWpDH…)
- "i asked Grok a simple question about some local city codes for the town i live i…" (ytc_UgzvWKe8_…)
- "He is AI he thinks what has most propablity to be an outcome. So asking him some…" (ytc_UgxN5kEzq…)
- "How can you say pathologist radiology I mean who will be do intervention that's …" (ytc_Ugzei8x53…)
- "The system is sustained by the poor and the middle class through consumption. If…" (ytc_Ugy6EzpTB…)
- "You forget that AI we use to pinpoint weapons coming at us. I won't worry about…" (ytr_Ugw7zbq3B…)
- "@ikeshkumar9246 automation will do what it can freeing people to do something mo…" (ytr_Ugw50zUv2…)
- "The advent of AI shows how lacking in technological morals some people have. I'm…" (ytc_UgwawaU6C…)
Comment
AI (Automated Intelligence NOT Artificial Intelligence, there is a difference, what we have is automated, not artificial intelligence) is not dangerous. People who use tools are dangerous. Computers do not think like humans. Humans have innate biological desires that shape their thinking and decisions. As a collective, we are incapable of understanding our nature and modifying it. Therefore, we can not program computers to "think" like humans. People are questioning the tool instead of the intent of those who use the tool. Supply and demand will shape the market of AI (what is created and what isn't). Like the smartphone, all of the potential will be reduced to gimmick and novelty. The average smartphone is a relatively powerful computer in your pocket, and every new smartphone that is released focuses on camera tech because people like to take selfies and videos. Most of us use 20% of our phone's potential, max. There will be no skynet or Armageddon, simply the pursuit of ego satisfaction.
Source: youtube · AI Governance · 2024-01-06T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw4oIIvxQbtJzx5qbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJlCu7FC5X6oscbGF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynHwExnmwBZwpdx5x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyQLJ7oQi2pZYRx2M94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwUZImRdLl9EHTBXu54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgynEuMjfotC-kxWYoZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwf5W5cOKesVx-tY454AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGTVw3t9dKJxOaXwd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpuejhsJ25l1VaOzN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxG0b8-cPsxrqWe-gd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
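The raw response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions. A minimal sketch of how such output might be parsed and validated before loading it into the dashboard, assuming the allowed values are exactly those visible in this page (the value sets and the function name are illustrative inferences, not an authoritative schema):

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# dashboard; a real pipeline would take these from its codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "government",
                       "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "mixed", "resignation", "fear"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only records whose id is
    present and whose dimension values all fall in the allowed sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with out-of-vocabulary values are dropped rather than repaired, so a malformed model response surfaces as missing codes instead of silently corrupting the counts.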