Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Wouldn't you just fine tune and embed your LLM to prevent hallucinations? Sounds…" (ytc_Ugyahjnas…)
- "She's just another leftist doom-shouter. So, how are you going to stop China? We…" (ytc_UgyhWIT9y…)
- "@thewannabecritic7490 i love how this reply is more coherent than the original co…" (ytr_UgznuXuR7…)
- "AI is what can bring us into a post scarcity environment. Quit being a luddite a…" (ytr_Ugz01_5_H…)
- "Electronic concentration camps are not a new idea. Hitler did so, with tattoed n…" (ytc_UgzPHW-M2…)
- "A machine can simulate to have emotions but it won't because it doesn't have the…" (ytc_Ugx0aliE3…)
- "Automata that have subatomic level, nano-sized particles, maybe like T-1000s, th…" (ytc_UgxGVNuMb…)
- "I could be wrong, but AI will never be able to step into the imagination that hu…" (ytc_UgygsKX0X…)
Comment
It's a tool with great promise and value but unfortunately will be able to do great harm in the wrong hands. The problem that I see is how would the nefarious uses be controlled? If a bad actor has a motivation to do something harmful and can figure out how to use an ai tool to do it, then they will. Also, if someone develops an ai process without ethical safeguards, then the program itself could cause harm without further human intervention. Either way, once the technology exists, it will be misused or misapplied either intentionally or accidentally, that's a certainty.
youtube · AI Governance · 2023-06-19T19:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyzPa9bonBdK2IiZxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwbm70vDBK0mQaRLhp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwrW3zBlH7pFsgoYBd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyfIDdwXHlKU9eY_-94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw1xMkcno0SVh_vTVN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzulch9DrC03TLTPZ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgziWUKUoBIvXwie-614AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyGIxJe2odDj5qZ7oB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy48er0dIo0d7qCLJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGJMyE5DDKEwGfV1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
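A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical example: the allowed values per dimension are only those observed in this sample batch (the actual codebook may define more), and the function name is an illustration, not part of any tool shown here.

```python
import json

# Dimension values observed in this sample batch; the real codebook
# may define additional categories (this set is an assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index codings by comment ID,
    rejecting any value outside the allowed set."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row[dim]!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgyGIxJe2odDj5qZ7oB4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_UgyGIxJe2odDj5qZ7oB4AaABAg"]["policy"])  # regulate
```

Indexing by comment ID mirrors the "look up by comment ID" view above: once parsed, any coding can be pulled up directly by its `ytc_…` / `ytr_…` identifier.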