Is the Musical.ly App Safe for 12-Year-Olds?

Musical.ly is a relative newcomer to the social app scene, and it seems to be particularly popular with young teens and perhaps even younger kids. The way it works is that users take a clip from a song or other media that they own and create a 15-second video with that clip as the soundtrack. Lip syncing and comedic sketches dominate the feed.

It sounds like fun, and according to an article at Business Insider, Musical.ly had 70 million users as of last month. Is it safe for 12-year-olds, or for younger users? Let’s take a look.

Age Limit

For starters, the age limit is 13, or possibly 18, and that’s not a joke. Once you download the app, you are never asked your age, so maybe they don’t care all that much. According to the app’s Terms of Service:

“IF YOU ARE UNDER 13 YEARS OF AGE, YOU MUST NOT USE OR ACCESS THE SERVICE AT ANY TIME OR IN ANY MATTER. Furthermore, by using the service, you affirm that you are at least 18 years of age.”

We’re guessing that the 18-year-old reference is a mistake and that the app’s age limit is really intended to be 13. Incidentally, it is rated 12+ in the App Store. In any case, if your child is under 13 and wants to use the app, you should know that she is breaking the rules.

Are the rules – in this case the age limit – important? If your child is under 13, they are. Users under 13 are protected by the Children’s Online Privacy Protection Act, which is your and your child’s primary means of protecting her personal information. Make no mistake: Musical.ly does collect her personal information, and might continue to use it even after she deletes her account.

Adult Content

When it comes to whether your child will encounter adult content on the app, the Terms of Service appear to be quite strict. Musical.ly does not allow users to post material that is “abusive, bullying, defamatory, harassing, harmful, hateful, inaccurate, infringing, libelous, objectionable, obscene, offensive, pornographic, shocking, threatening, unlawful, violent, vulgar or promoting bigotry, discrimination, hatred, violence, or inciting violence.”

When it comes to nudity, sexual content and other adult material, our take is that the language above is actually quite vague. Obscene content and pornography are specifically prohibited, but a lot depends on how Musical.ly defines those terms and how it monitors for them. Furthermore, since nudity itself is not on the list, one might assume that some nudity is allowed. While browsing the app this week, we didn’t see any nudity, but profanity was easy to find.

Hashtags are widely used on the app and it has a search function, but we were happy to see that searches for many adult-oriented hashtags (#boobs, #sexy, etc.) are disabled.

Predator Risk

As with most social apps, predator risk is an issue. If you want to keep your child safe, we recommend setting the account to private, hiding location info and only allowing direct messages from friends. All of those controls are accessible from the settings menu.

In summary, Musical.ly is no less safe than most other social apps, and it probably has less adult content than many. However, since a large share of its users appear to be girls well under the age of 18, it could over time become a favorite of predators. We would still caution parents against allowing children 12 and under to download Musical.ly, and would urge caution with older teens as well.

If you want to read more about what other parents have to say, there are a lot of parent reviews posted at Common Sense Media.

If you are worried that your teen or tween is at risk, we can help. The ThirdParent initial audit is now FREE (previously a $49 value). Ongoing monitoring is $15 per month and you can cancel at any time. Click here to sign up today!

Contact ThirdParent any time for help and resources for monitoring child and teen internet activity.

Work at a high school or college? We have custom solutions for monitoring dangerous or inappropriate activity. Learn more.

Follow us on Twitter or Facebook for more news and information on keeping your teens safe online. You can also sign up for our weekly newsletter below.

Behind The Spayce App’s Moderation Plans

We wrote earlier this month about the new Spayce app, which functions like a combination of Instagram and Yik Yak. Our early conclusion was that it could be a haven for inappropriate activity given that it is location-based, and high school and college students tend to gravitate toward such apps.

As to what is and isn’t permitted on Spayce, it is hard to tell, as the Terms of Service are quite vague. The company has said on Twitter that it doesn’t allow adult content. We reached out for comment and they were kind enough to reply. Below are our original questions, the company’s answers and some follow-up thoughts of our own.

How exactly do you monitor content – human monitors or software?

“We use human moderators to monitor and manage content, and also rely on our community to help us identify content that’s not a good fit for Spayce. Users can report content, comments, and profiles and our core team manually reviews all content that is flagged by our users. We have a real-time system of alerts in place that helps us to do this across [the platform].”

We’d be interested to hear how much of their monitoring is preemptive or done in real time. As Spayce grows, it will no doubt be impossible for human moderators to review everything. On almost every social network – and Spayce appears to be no exception – users should report abuse and inappropriate conduct whenever they see it.

Is all nudity and sexual content banned?

“Images/videos of nudity are banned, but we have tried to maintain a light touch on some of the borderline stuff.  We believe in letting our users drive the spirit of the community, and for the most part this seems to be working for both us and the community.”

You may have read that Instagram this week updated its policies to spell out exactly what types of images are not allowed. As Spayce matures, we would expect to see more explicit guidelines.

Does this ban refer to images only, or images and text?

“We moderate all content, but to date, most of the problematic content that has required moderation has been images.”

We were on the app this morning and within minutes saw a post with superimposed text that was extremely inappropriate and possibly constituted homophobic cyberbullying. We reported the image and the user – the process was quite easy. The image hasn’t been deleted yet, but we’ll update this post if it is.

What about drug and alcohol references, firearms etc.?

“We moderate all reference to hard drugs and all content related to buying/selling drugs.”

There is no contextual search function that we can see – you can only browse by location – so perhaps drug dealing will not be a problem.

Overall, it remains to be seen how well Spayce can keep up with monitoring inappropriate content. We’ll keep you posted.

Twitter Timeline Changes Could Be a Negative for Parents

Social media networks are no strangers to tinkering with their features, privacy settings and user experience. They are in the business of selling advertising, and anything that increases their user count or the time existing users spend on the platform is, from their perspective, a move in the right direction.

New changes made by Twitter this week alter the way the platform works, in a way that could be a negative for young users and their parents.

Until now, a user’s timeline (what she sees when she opens up Twitter) has been made up of the most recent posts and retweets from the people she follows. Other than the occasional ad (a “promoted Tweet”), that’s it. Now Twitter is taking the liberty of adding other popular content. From Twitter’s support section:

“Additionally, when we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.”

Here is where it could get dicey for parents. Twitter is well known for having very few content restrictions: nudity, pornography, drug references and profanity are all permitted, and common. If your daughter has carefully curated the list of accounts she follows, the good news so far is that she won’t see any of that.

Now that might change. Let’s say that someone tweets a leaked nude picture of one of the guys from One Direction. Twitter’s algorithm might decide that it is very relevant to her. Maybe Ed Sheeran will come out in support of hard drugs. Again, Twitter decides. Sure, your child could go seek that information out, but Twitter may serve it up to her on a silver platter.

If you’re a parent who has helped your teen keep it clean online, this has the potential to throw you a few curveballs. We’d like to assume that Twitter’s algorithm will screen out inappropriate “relevant” content, but it might not. As a parent, I would rather not leave it to Twitter to decide what my teen sees.

Bad move, Twitter.
