Raw LLM Responses
Inspect the exact model output for any coded comment. Responses can be looked up by comment ID, or browsed from the random samples below.
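For scripted access, something like the following works. This is a minimal sketch that assumes raw model outputs are stored as JSON Lines, one coding batch (a JSON array of records, as in the example at the bottom of this page) per line; the file name `raw_responses.jsonl` and the storage layout are assumptions, not the tool's documented backend.

```python
import json

def find_raw_record(comment_id: str, path: str = "raw_responses.jsonl"):
    """Return the raw coding record for one comment ID, or None.

    Assumes one coding batch (a JSON array of records) per line.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example lookup, using an ID that appears in the raw response below.
print(find_raw_record("ytc_UghqesxgJCu2HngCoAEC"))
```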
Random samples

- `ytc_UgwfREHqY…`: "Giving AI a goal, requires the AI to complete the goal. If the AI believes there…"
- `rdc_jy02f51`: "If the brits didn't have their empire, maybe indians wouldn't starve to death du…"
- `ytc_Ugwb5WNo-…`: "As a professional AI Engineer currently I'm working in Infosys Hyderabad. my exp…"
- `ytc_UgzZif0mz…`: "How much does AI depend on the power grid? If the power grid shut down comp…"
- `ytc_UgzsjY7n3…`: "Robot arms are gonna be building that burguer too soon so i dont know what jobs …"
- `ytc_UgyqBWaCr…`: "Pausing at 15 min to knee jerk react so probably off. Philosophical rigor is act…"
- `rdc_mul0bam`: "I went through my own AI induced psychosis experience about two years ago, with …"
- `ytc_UgyB3pN2B…`: "The AI thing is really not new. The changes in manufacturing from 1900 to 1940 i…"
Comment
> Conciousness doesn't include having feelings, robots would be generally smarter than humans and if their learning capabilities were limitless they would exceed human intelligence quite fast and perhaps look at humans the same way we look at apes. Creating AI would be too dangerous for humans so don't think future humans would want to do such thing and if they did they would limit their capabilities to stop them exceeding our own intelligence. Until someone who wants to get rid of humanity gets hold of the technology to create AI and secretly feed it all the information it needs to destroy humanity and create a new era where humans are extinct along with every other animal on earth, only concious thing left being AI which then discovers technology that would seem like magic to us and then proceeding to colonise our entire galaxy and beyond...
>
> Or the AI would commit suicide realizing there is no reason to do anything, all has no purpose and everything and existence is nothing but probability.
Platform: youtube · Video: AI Moral Status · Posted: 2017-02-23T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
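A coded comment can be modeled as a small record type. The sketch below is one plausible shape, not the tool's actual schema; the label values in the comments are only those visible on this page, and the real codebook may define more. The example row reproduces the coding result above, which matches the record for `ytc_UghqesxgJCu2HngCoAEC` in the raw response below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, reduced to the four dimensions in the table above."""
    id: str
    responsibility: str  # seen here: "developer", "ai_itself", "none"
    reasoning: str       # seen here: "consequentialist", "deontological", "contractualist", "unclear"
    policy: str          # seen here: "ban", "none", "unclear"
    emotion: str         # seen here: "fear", "outrage", "approval", "indifference", "mixed"

# The coding result shown above, as a record.
row = CodingResult(
    id="ytc_UghqesxgJCu2HngCoAEC",
    responsibility="developer",
    reasoning="consequentialist",
    policy="ban",
    emotion="fear",
)
```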
Raw LLM Response
```json
[
{"id":"ytc_Ugi9N6GWL6cC7ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghiCDQ5-AqcYngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UghqesxgJCu2HngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi49UirK0ZNlngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghBFYgz1bIil3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgipLAfLyJQ7FXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjcqmMQdYOWJXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgjTm-RYCS7jdngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggkmMM8P9RXzHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgiL3f3OtGXlw3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
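When consuming these raw responses programmatically, it helps to validate each record against the expected label sets before indexing by comment ID. The sketch below is one way to do that; the `ALLOWED` sets are an assumption built from the values observed on this page and are likely a subset of the real codebook.

```python
import json

# Label sets observed on this page; the full codebook may define more values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array) and index its records by ID."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

Indexing by ID is also what makes the "look up by comment ID" view above cheap to serve.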