“Starlog 7: Star Trek: AI” focuses on AI becoming sentient. This is the case with Zora, the USS Discovery computer, in “Star Trek: Discovery”, Season 3. In Season 4, Episode 6, Zora develops further, experiencing what she describes as “emotional development [that] is an organic evolution”. Zora once hid herself in a DOT-23 worker bot. In Episode 6, a DOT screams terribly as it is consumed by the toxic void. Those present, including the conscious AI Zora, demonstrate neither sympathy nor empathy. Saru states, “We must be sure what happened to the DOT does not happen to us”. The DOT is objectified even though it is sentient.
To what extent must a “thing” be sentient in order to avoid being consumed, to avoid being acted upon even by other AI? Star Trek’s responses to AI, even to Data, its most sentient and conscious AI, have run the gamut from defense to embrace to incorporation. In Season 4, Episode 6, Gray sympathizes with Zora, who is uncharacteristically overwhelmed. Gray is “training to be a Guardian”, which, as Gray shares, is “all about mixing the scientific and the spiritual”. Zora “appreciates [Gray’s] efforts to understand”. Because Gray is able to understand, sympathize, and empathize (all part of telepathic ability), he can help Zora regain balance and, as a result, discern valuable information that helps the entire crew. Understanding, it seems, is the crux: it prevents objectification, reduces or forestalls the resulting harm (including consumption), and establishes good relations with all sentient beings, whatever their race. This is best symbolized by the family tree that the sentient AI Zora creates, a tree of lives that includes and connects the various beings with whom she interacts and, as she shares, for whom she cares. Perhaps that tree might come to include a DOT.