## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a coding directly by its comment ID.
### Random samples

| Comment | ID |
|---|---|
| Business does NOT exist to provide you a JOB! Business exists to provide a PROF… | ytc_UgwTHlQkv… |
| The text discusses the importance of understanding AI ethics as AI becomes more … | ytc_UgyccQ-Qq… |
| I have and yes is the answer. It took about half an hour of questions to get the… | ytr_UgzZQrVs4… |
| This presenter was doing ok then she proceeded to virtue signal on how Elon Musk… | ytc_UgzM6ZOjt… |
| Law makers will be using AI to come up with there solutions as well it’s already… | ytc_UgxRUsQDf… |
| Anything to do with information management will become automated over time. AI w… | ytc_UgxLTYk60… |
| mmm this shit smells bad, reminds me the beginning of the robot revolution like … | ytc_Ugi67CJYW… |
| AI systems like ChatGPT don’t have desires, intentions, or agency. They respond … | ytc_Ugx8P4IGw… |
### Comment

> According to Jeffrey when different groups of AI work together they are more unpredictable and dangerous, I assume it is so because AI does not have a sense of right or wrong, good or bad, it might not take into consideration on it's own the negative effect of achieving a given task especially in military settings and when used for terrorism. So what is the solution? mass panic? definitely not. The solution is international monitoring and regulation by federal governments.

Platform: youtube · Topic: AI Governance · Posted: 2023-05-05T22:2…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
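Each coded comment carries one value per dimension. A minimal validation sketch, assuming only the label sets observed in this export (the full codebook may define more values; `SCHEMA` and `invalid_fields` are illustrative names, not part of the pipeline):

```python
# Label sets observed in this export; assumed, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def invalid_fields(coding: dict) -> list:
    """Return the dimensions whose value falls outside the known label set."""
    return [dim for dim, allowed in SCHEMA.items()
            if coding.get(dim) not in allowed]

coding = {"responsibility": "ai_itself", "reasoning": "deontological",
          "policy": "regulate", "emotion": "fear"}
print(invalid_fields(coding))  # → []
```

A coding with a missing or unrecognized value in any dimension shows up in the returned list, which makes malformed model output easy to flag before it reaches the results table.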
### Raw LLM Response

```json
[
  {"id":"ytc_UgypCXKCzQsXsgIsezZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzPsq25QlhrDmD7SBV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPgZPeukKAb6A12EN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMObPFnILwBlwORqp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkeZWFibObjcgjHXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXs5en3K1hWJcip5B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwV0C3pXwoII3dJykR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwhZ9aT6BQXa4avag94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwlpdV9Ks2c8sMOuLJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZBZjirUjoBNmLRvB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
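The raw response is a JSON array with one coding object per comment in the batch, keyed by comment ID. A minimal sketch of parsing such a response and indexing it for lookup by ID (assuming the response text is available as a string; `index_codings` is an illustrative helper, and the two records are taken from the array above):

```python
import json

# Raw batch response from the model: a JSON array, one coding per comment.
raw_response = """
[
 {"id": "ytc_UgypCXKCzQsXsgIsezZ4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "none", "emotion": "fear"},
 {"id": "ytc_UgzkeZWFibObjcgjHXN4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {c["id"]: c for c in codings}

by_id = index_codings(raw_response)
coding = by_id["ytc_UgzkeZWFibObjcgjHXN4AaABAg"]
print(coding["policy"])  # → regulate
```

Indexing once and looking up by ID is what makes the "look up by comment ID" view above cheap: each lookup is a dictionary access rather than a scan over the whole batch.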