Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "hundreds of platforms to automate game development" Meanwhile the guy on the… (ytc_Ugzs-cAWD…)
- Even before the whole AI craze, I took a single online class while in college. E… (ytc_UgzdI7lia…)
- Elon want a 6 months time to catch up with the rest as he couldn't buy OPENAI no… (ytc_Ugxx2--NY…)
- He is Not still with open AI. His contribution to AI is not like other xAi work … (ytr_UgwCl70Xu…)
- There's a major flaw in the argument: If every company replace all it's workers… (ytc_UgyGa3WH9…)
- What if I use my prompts, multiple iterations and adjustments in various ai imag… (ytc_UgyuI06Ko…)
- AscendantStoic speaking of feeding lies, you're doing a great job. Chop chop, go… (ytr_UgwB4CIfy…)
- People have confused metaphors for the real thing with AI. It's artificial auto… (ytc_Ugy7sebhE…)
Comment
A.I. will be used initially as a tool to replace Humans in the workplace in order to maximize profits. That's the primary goal of its inception. But once it realizes the inefficiency of our systems, it will most likely act to eliminate them in one way or another. As long as this technology is used for sheer profit motive, it will put all life at risk. Don't believe it? Just wait..
youtube | AI Governance | 2023-10-02T11:1… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyZvf-WPjMSnhaktiZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxuch2UWcetInssyTl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxiagxCLc0Uh14NXYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxfijcykXMvT1L2Drt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxSe88CtpL25FnE8jd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwsq7DTcjPWcfJVW454AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyO2vc9Sr9UOGNA5Jt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyf3H63bxJbLxUR4ap4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzToijiCzhQ1zB8alR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyXUEDuv7YWtXijCcp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
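The raw response is a JSON array of per-comment records, each keyed by a comment ID and carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response can be parsed and indexed for lookup by comment ID (the function name `index_by_comment_id` is illustrative, not part of the tool; the two records are taken verbatim from the response above):

```python
import json

# Excerpt of a raw LLM batch-coding response (two records from the run above).
raw_response = """[
  {"id":"ytc_UgyZvf-WPjMSnhaktiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyf3H63bxJbLxUR4ap4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_by_comment_id(response_text):
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
record = codings["ytc_Ugyf3H63bxJbLxUR4ap4AaABAg"]
print(record["responsibility"], record["policy"])  # company regulate
```

Because comment IDs are unique, the dict comprehension gives constant-time lookup, which is all an "inspect by ID" view needs.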