Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Just like Clark Kent, who embodies the potential for both good and evil, AI has …" (ytc_UgxlnZlqM…)
- "The prediction about AI taking over half of entry-level jobs is quite alarming. …" (ytc_Ugzm1X16Z…)
- "Maybe a self-aware AI would want to conceal its self-awareness. It would hide se…" (ytr_Ugxpq4ftN…)
- "Wisdom is a result of knowledge acquired. Saying that a robot can’t be wise is n…" (ytc_UgxNGQ1w_…)
- "I grasp the importance of reassuring the public, especially the generation of b…" (translated from French) (ytc_UgwcsUGph…)
- "Remember a example of movie / In Mcu , tony had Jarvis, together with banner the…" (ytc_UgxpYRq-C…)
- "That’s fake because robots can’t have emotions and they are made by humans we ha…" (ytc_UgwewJQfE…)
- "Soon robots will be ruling us?? And perhaps the human species will be slave…" (translated from Greek) (ytc_UgxzzYu2J…)
Comment

A hypothetical "AI" could be conscious, but I doubt how one would make one, or what "AI" even is.
ChatGPT 3, 4, 5, or any other probability algorithm can't be conscious, because it is just an algorithm. It has the same "intelligence" as a coffee machine, but more data and more capabilities depending on how it is used. For example, there is one algorithm that controls multiple nuclear powers and has the goal of wiping out humanity. But there is no intelligence there; it just does what it does, based on biased probability from biased, limited data.
Platform: youtube
Video: AI Moral Status
Date: 2024-05-20T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgytPQetGNOE3d1gyBt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugxb-ySI4wzmrFcOjE54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwUROeWn0BgCUPSvQx4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgzpTKNM7lKcO79MZHt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxn-afWIhJ6yBxceaB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzj33EGw1jVOgRolGp4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyTtrP37CmE4H4NqWl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwCMfRdOE1xVZlWmul4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVOes8n8WtG_nZXhd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgydLjrQRDwNlHr8M2B4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
```
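The look-up-by-comment-ID step described at the top of this section can be sketched as follows. This is a minimal illustration, assuming the raw LLM response is a JSON array of per-comment codes like the one above; the variable names are illustrative and are not part of any tool's actual API.

```python
import json

# Two records copied from the raw response above, abridged for the sketch.
raw_response = '''[
  {"id": "ytc_UgytPQetGNOE3d1gyBt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwUROeWn0BgCUPSvQx4AaABAg", "responsibility": "company",
   "reasoning": "unclear", "policy": "regulate", "emotion": "unclear"}
]'''

codes = json.loads(raw_response)           # list of per-comment code dicts
by_id = {rec["id"]: rec for rec in codes}  # index once, then look up in O(1)

# Inspect the exact coding for one comment ID.
rec = by_id["ytc_UgwUROeWn0BgCUPSvQx4AaABAg"]
print(rec["policy"])  # -> regulate
```

Indexing the parsed array by `id` once, rather than scanning the list on each lookup, keeps per-comment inspection constant-time even for large coding batches.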