Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "A.I. Will become, if not already, the greatest smoke screen, red herring, cover …" (ytc_UgwOVJ3Bv…)
- ""You have to look at this objectively" says the driverless truck company........…" (ytc_Ugzh1P93f…)
- "AI does not have to be here to stay. If people choose to not use it the companie…" (ytc_Ugw3agUXB…)
- "If ai leads to abundance, capitalism must die, we need something new, it's just …" (ytc_UgyvSbPTB…)
- "If the end result Unless the end result from an AI rendered image is unique and …" (ytc_Ugx7ExcJa…)
- "I was going to ask this. He could have gotten this info from anywhere else if he…" (rdc_nartqp4)
- "Weirdly, ive also come to a agreement and understanding with my ChatGPT, there's…" (ytc_Ugxmexawh…)
- "With the first set of CHAT GTP graduates we have set in motion the obsolete need…" (ytc_UgxOK-dI4…)
Comment
Artificial intelligence can NOT reason. That is false. It can sift through immense amounts of data and make connections between data points. It can "learn" new actions, sort of. It isn't actually intelligent. It doesn't think. It isn't conscious or self aware. It's a machine. Just a complex machine.
That isn't to say that it isn't dangerous. It is. Giving AI control of important decisions is asinine.
youtube · AI Governance · 2023-07-07T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzPlH9VTEnjWCp7ZJR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyNwbS6VzfVpymcbRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOWhbuocatAITMxx54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyuulqXnUdcuQrAvZR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzA1CfuOuXAt-5HuRN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxR6CflaKtGwp_sFJZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwtlGmbrQJdp5fM0b54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1VhARGP6x11A_l6B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy2FbFqb0xjVsT0vmR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwmTRylv2veMMcE6HV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
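The raw response above is a JSON array, one object per comment ID, with one value per coding dimension. A minimal sketch of how such a response could be parsed into the per-comment lookup the viewer offers, assuming the allowed label sets are those inferred from the values visible in this response (the real pipeline's label vocabulary and validation rules are not shown here):

```python
import json

# Assumed label sets per dimension, inferred from the values seen in the
# raw responses above; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array) into a lookup by comment ID.

    Rows containing a value outside the expected label set are skipped rather
    than raising, since LLM output can drift from the codebook.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        dims = {k: v for k, v in row.items() if k != "id"}
        if cid and all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            coded[cid] = dims
    return coded
```

With this, "look up by comment ID" is a plain dictionary access, e.g. `parse_coding_response(raw)["ytc_UgwmTRylv2veMMcE6HV4AaABAg"]` would return the developer/deontological/liability/fear record shown in the coding-result table.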