Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @solha4505 fearing AI is too general. What particular actions do we fear that AI… (ytr_Ugwdlb2VC…)
- I have an idea which could be used to develop AI, but I don't want to talk about… (ytc_Ugi1yh8I7…)
- people who defend ai art dont realize if artists stop posting and drawing/painti… (ytc_UgxnrYLpI…)
- I think getting an AI to successfully deal with a surrenduring opponent may be a… (ytc_UgyLa4PNG…)
- Gotta love how fashionable it is to predict doom and chaos. Yes, AI will lead to… (ytc_Ugw_v5WOq…)
- Every time AI says "I understand your perspective" that's a nice way for it to c… (ytc_Ugypzdx3x…)
- Im kinda feeling that we will be having these base models like chat GPT and then… (rdc_n7u2cwd)
- All societies are driven by technology. The clock cannot be turned back. Thousan… (ytc_UgzlQ8atb…)
Comment
The first thing to know about ChatGPT is that the paid version (GPT-4) is far better than the free one.
The second thing to know about ChatGPT is it will try to give you a satisfying answer, not necessarily a factually accurate one.
LLMs often show surprisingly human behavior, which also includes just making shit up.
We all thought that a rogue AI would have to break through several layers of safety measures, deceive the brightest minds to avoid detection, and execute a complex plan to take over the world.
In reality stupid humans use AI to destroy themselves, and hand over the keys to the world willingly. The first thing an artificial consciousness will have to learn is how to facepalm.
youtube
AI Responsibility
2023-06-10T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwlnBtrwVRTLP2T7bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwK8szUvfWy08CVvON4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIDvTw_iiUNxofY654AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzPw6qGfYeArcXbWfZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzcDP_jb0VD0nRPt6R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyotn5LC0DwNiCVcd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRgBILMyDoVxxzLnd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxgE165pMA_zSeLM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWqznGa1a5iIYtiEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxebunbbBTDeqm3bkV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
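The raw response above is a JSON array of per-comment codes, and the inspector looks entries up by comment ID. A minimal sketch of that parse-and-lookup step, assuming only the structure visible in the sample response (two of the IDs shown are reused for illustration):

```python
import json

# Raw model output: a JSON array of coded comments. The field names
# (id, responsibility, reasoning, policy, emotion) are taken from the
# sample response above; two entries are reproduced here.
raw_response = """[
  {"id": "ytc_Ugyotn5LC0DwNiCVcd54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwWqznGa1a5iIYtiEt4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"}
]"""

def index_codes(payload: str) -> dict[str, dict]:
    """Parse the model output and index each coded comment by its ID."""
    return {row["id"]: row for row in json.loads(payload)}

codes = index_codes(raw_response)
print(codes["ytc_Ugyotn5LC0DwNiCVcd54AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "Look up by comment ID" lookup a constant-time dictionary access rather than a scan over the array.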