Overview

Who’s this for? Parents of children and young adults

What was the problem? The internet is a deep, dark, scary place without parental controls

How long? 3 days

Why is this in my portfolio? It shows how I handle design challenges: a thorough process on a short timeline

What was the scope? Research, synthesis, ideation, wireframes, hi-fi mocks

The design brief


Kids of all ages have access to the Internet through the devices and applications that they use on a daily basis. In a perfect world, the parent would have control over what content their children are able to access. But we know that we're not in that perfect world.

Imagine that a child would like to “search” for the next best toy, video game, sneakers, etc. If they use traditional search tools there is a possibility that the child might stumble upon less than ideal content, even with parental control features.

Design a digital experience where a child (ages 5-16) will be able to search for items/things with the goal being that their parents will have 99% confidence that their child won’t be subjected to questionable content. Keep in mind that children could stumble upon questionable content in many different scenarios and situations (in game up-sells, search engines, mobile apps, web apps), and while using different devices. There is no restriction in what you decide to illustrate but what you select will be considered.

Desk Research


I started the project by asking a few key questions that I thought would have an impact on design decisions. These included:

• What are the most common devices and operating systems that the target audience uses to search?

• What are they searching for?

• What are the most common apps/social media platforms that they use?

• What is questionable content?

• What monitoring software is currently available to parents?

What are the devices used by children?

• Smart TVs
• Game consoles
• Tablets
• Ebook readers
• Mobile phones
• Smart watches
• Computers
• Smart speakers

What do children search for?

 
 
• Video content (17.25%)
• Translation services (13.58%)
• Social media (9.88%)
• Education (9.86%)
• Computer games (9.09%)
• Porn (0.74%)

What are the most common platforms?

 

What is questionable content?


Questionable content can be a tricky subject to define. Depending on who you ask, the same content can be questionable or completely acceptable. While some online threats, such as sexual predators and self-harm topics, are almost universally considered taboo, others, such as sex and violence, fall into grey areas.

Questionable content also varies wildly with age: something considered inappropriate for a 6-year-old might not be for a 16-year-old. Questionable content can include:

• Porn
• Sex
• Scantily clad people
• Suggestive poses
• Erotic literature
• Explicit music
• Hentai
• Peer pressure
• Self-harm
• Bullying
• Sexual predators
• Substance abuse
• Suicide
• Cults
• MLMs
• Fake/excessive advertising
• Unhealthy body images
• Violence
• Nudity
• Language
• Extremist behavior
• Racism
• Hate
• Criminal/antisocial behavior

What are the current filtration methods?

 
• DNS-level blocking
• OS parental controls
• Browser plug-ins
• Router-level blocking
• App monitoring
• Personal supervision

Insights


• The internet can be a dark, dark place.
• Blanket bans often do not work.
• Abstinence ostracizes.
• More monitoring does not equal better parenting or a healthier upbringing.
• Most children are aware that they are being monitored.
• Parents are aware that children can look up how to switch off parental controls.
• Monitoring is often trust-based.
• What counts as questionable content changes across age groups.
• Upon realizing it, some children are not comfortable with the fact that they have online personas curated by parents or institutions (e.g. schools).
• Some parents use devices for immediate gratification, and these devices often act as babysitters.
• Questionable content can be found anywhere; there are no safe spaces.

Synthesis - Parental Archetypes

 

Invisible Crocodile

Use media and technology for a large portion of their babysitting needs. Will play a video/give the child a tablet to keep them entertained.

Paranoid Penguins

Authoritative/conservative. They would like to keep their children away from anything they consider inappropriate for as long as possible.

Helicopter Hawks

Over-protective and overbearing. Helicopter parents almost hover over their children to meet all of their needs/requirements/demands.

 

Wise Owls

A balance between monitoring access and trusting the child. They are aware that the internet can be both harmful and empowering.

Hippy Hippo

Extremely liberal. They keep an open relationship with their children and trust them to make and learn from their own mistakes.

 

How Might We…

 
 

HMW create a safe virtual space that shields children from objectionable content across platforms and devices while allowing for privacy, autonomy, and freedom?

 

HMW create a space that grows with the child and allows for flexibility in parenting style?

 

Ideas


• New Mobile OS
• App based monitoring (App has all permissions granted on child's phone)
• Access to devices only in living rooms; no bedrooms.
• Software to flag inappropriate content for parental review without blocking content
• Devices that are shared completely. Everyone's device can access other devices
• Content filtered based on parental choices.
• Kid gets reward for flagging inappropriate content.
• App/web-based resource for education on questionable content for parents/kids
• Helpline/Virtual Assistant for child to talk to about concerns
• Router based DNS blocking
• Router Based AI Blocking
• Device based AI Blocking
• Browser plug-ins
• Communication forum for parents to share content/curate lists etc.
• Social media platform targeted at kids only. Physical ID verification to register. Moderated.
• Child only phones with locked permissions and child age appropriate applications

What I went with


The idea I decided to go ahead with was AI-based content filtering and monitoring. The biggest reasons for this were:

Semantic understanding of content is important. Excessive control can be harmful to the child: studies have found that the use of monitoring apps can actually lead to the teen being ostracized.

Predators and questionable content can be found on any platform. Instagram, YouTube, and TikTok all have their share of questionable content. Simply blocking entire platforms/websites is not a healthy solution.

Parenting styles differ wildly from family to family. It is important to allow for flexibility in monitoring to best serve the end users.

Since the target audience ranges from 5 to 16 years of age, it is important to have a system that understands the content, so the filtering can change as the children grow into young adults.

Feasibility

Software developer - Systems

Software developer - Computer Vision

Some things I heard:

“TLS encryption means you will not be able to get any data beyond IP addresses. Content-aware filtration will not work at the router level”

“You can use NLP and multi label classification to flag data. NSFW filters already exist for text and images. Video and audio will be tricky.”

I spoke with two technical experts to get an overview of how a system like this might be implemented. The initial idea I had was to implement it at the router level, so any content passing through the home router could be monitored.

However, due to encryption, it would be impossible to read data beyond domain names at the router level. Since I did not want to block entire websites, I decided to implement it at the device level instead.

I was told that it would be possible to use Natural Language Processing (NLP) and multi-label classification to implement content-aware filtering. The AI would generate labels for all data; these labels could then be used to flag content that the parent deems inappropriate.

To implement it at the device level, I decided that a standalone browser app for mobile devices, along with browser plug-ins for Chrome/Firefox, would work. The parent could install the standalone browser on their child's mobile device and the plug-ins on any computers the children have access to.

The parent controls the system through an app that lets them add or modify flags and monitor activity. For the purpose of this exercise, I decided to flesh out what the parent's app would look like.
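To make the pipeline concrete, here is a minimal sketch of the label-then-decide flow described above. The keyword-matching "classifier" is a hypothetical stand-in for a real multi-label NLP model, and all names (labels, rules) are illustrative assumptions, not part of the actual design:

```python
# Sketch: a multi-label classifier assigns labels to content, then
# parent-configured rules decide whether to allow, flag, or block.
# The keyword matcher below stands in for a real NLP model, which
# would return per-label confidence scores instead of a simple set.

LABEL_KEYWORDS = {
    "violence": {"fight", "gun", "blood"},
    "self_harm": {"self-harm", "cutting"},
    "adult": {"nsfw", "porn"},
}

def classify(text: str) -> set[str]:
    """Return the set of labels whose keywords appear in the text."""
    words = set(text.lower().split())
    return {label for label, kws in LABEL_KEYWORDS.items() if words & kws}

def decide(labels: set[str], parent_rules: dict[str, str]) -> str:
    """Apply the parent's per-label choice: 'block' outranks 'flag',
    and labels the parent has not configured are ignored."""
    actions = {parent_rules.get(label, "ignore") for label in labels}
    if "block" in actions:
        return "block"
    if "flag" in actions:
        return "flag"
    return "allow"

# Example rules, as a parent might set them in the app.
rules = {"adult": "block", "violence": "flag"}
print(decide(classify("new nerf gun review"), rules))  # flag
print(decide(classify("funny cat videos"), rules))     # allow
```

A "flag" result would surface a card in the parent's activity feed, while "block" would stop the content from loading in the child's browser.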

How does it work?

Asset 13@2x.png

The product uses real-time filtering based on NLP and multi-label classification. All searches are monitored for inappropriate content. Parents can choose which content they would like to be made aware of (flags) and which content they would like to block. The system allows for great flexibility, as it lets parents know when their child is at risk while also creating a safe space that the child can feel confident exploring.

This works better than current monitoring technology: it is neither constant surveillance, where the parent manually goes through all of their child's search history, nor a blanket ban on entire websites, which can make the child feel their freedom is curtailed.

User flow


Lo-fi sketches


Annotated wireframes

 

1) Profile creation form for parent. Basic details.
2) Profile creation for child.
3) Auto generated icon to establish the identity for child. Same icon is used to refer to this child throughout the app. (Can be replaced with photo)

4) Add flags to monitor content. Flags are recommended based on age group. Parents have an option to flag, block or ignore.
5) Image from step 3.
6) Flagged content changes color to green.
7) Toggle between Flag and Block.

8) Blocked content goes orange.
9) Summary page for child 1.
10) Card with all flags for child 1. Option to edit.
11) Option to add profiles for more children.
12) Prompt to install app + browser extension.

 
 

13) Summary of all registered devices.
14) List of devices registered to each child.
15) Activity monitor. Lists details in a scrollable news feed style format with a card for each flag.

16) Current tab. "Review" takes the parent to content that has been blocked; "Urgent" highlights extremely worrisome searches.
17) Card with details of flag.

18) Monthly and yearly overviews to help parents see trends and patterns at a glance. Option to see details.
19) Card with details of most flagged/blocked content and total flags.

Hi-fidelity comps


Possible next steps


During the project I made two big assumptions:

Assumption 1: Parents want an intelligent monitoring system that is aware of the content being consumed and do not necessarily want access to all of their child's online activity. In essence, parents want their child to be safe while still being autonomous.

Assumption 2: The "safe but private" value proposition is enough to make people want to switch browsers.

As possible next steps, I would validate these assumptions with quantitative and qualitative primary research. Although I took a cursory look at technical feasibility, the practicality of the platform from a monetary as well as a technical standpoint also needs to be established.

What did I learn?

 

The brief was a great starting point as I felt that it was concisely worded and allowed for a lot of flexibility with respect to possible design directions. I am happy with the content aware system that I worked on as I think constant monitoring can feel like a breach of privacy and be counterproductive. On the other hand, the internet is a scary place and there needs to be some monitoring to ensure that the child is not exposed to content that might cause them bodily or mental harm.

While the system works well within the app/plug-in-enabled browser, it does not solve the problem of questionable content found on other devices (Xbox, PlayStation, etc.). It also does not work for content that the child accesses through other apps like TikTok and Snapchat, which can be places for bullying and other questionable content. Given the timeline, the proposed system still comes with an asterisk in terms of technical and practical feasibility, as I was told that cloud-based real-time monitoring can be prohibitively expensive.
