Meta has begun deploying a new AI-driven support assistant across its platforms, marking a significant shift in how the company handles user support and content moderation.
The tech giant said the Meta AI support assistant is now being rolled out globally on Facebook and Instagram, offering users round-the-clock assistance for account-related issues, including password resets, privacy settings, and profile updates.
According to the company, the assistant is designed to move beyond providing suggestions by taking direct action on behalf of users. It can help report scams and impersonation accounts, explain why content was removed, guide users through appeal processes, and track outcomes. The tool is integrated within the apps and their Help Centers, allowing users to access support within seconds rather than navigating traditional help channels.
Meta noted that the system typically responds in under five seconds and has already received largely positive feedback from early users. The assistant is available in all languages supported by Facebook and Instagram and is expected to expand further, including support for login-related issues in more countries beyond initial rollouts in the United States and Canada.
Beyond customer support, Meta is accelerating the use of advanced AI systems to strengthen content enforcement across its platforms. The company said these systems are being developed to more accurately detect and remove severe violations such as scams, fraud, and illegal content, while reducing errors associated with over-enforcement.
Early testing has shown measurable gains. Meta disclosed that its AI systems are identifying up to 5,000 scam attempts daily that previously went undetected, while also significantly reducing impersonation cases, with reports involving highly impersonated public figures dropping by over 80 percent. The systems have also doubled detection rates for certain types of violating content, such as adult solicitation, while cutting enforcement mistakes by more than half.
The technology is also being applied to detect suspicious account behaviour, such as unusual login patterns or sudden profile changes, helping to prevent account takeovers. In addition, Meta said its AI tools can identify fraudulent websites and misleading advertisements, contributing to a 7 percent reduction in views of scam-related ads during testing.
A key advancement, according to the company, is the expanded language capability of its AI models. The systems can now operate across languages spoken by about 98 percent of the global online population, up from roughly 80 percent previously, while adapting to regional slang, cultural nuances, and evolving online behaviours.
Meta plans to scale these AI systems over the coming years, gradually transforming its approach to moderation. As part of this shift, the company expects to reduce its reliance on third-party content moderation vendors, instead strengthening its internal systems and workforce.
Despite the increased automation, Meta emphasised that human oversight will remain central. While AI will handle repetitive and large-scale tasks, human reviewers will continue to make high-stakes decisions, including appeals and cases involving law enforcement.
The company added that its Community Standards remain unchanged, with ongoing testing and safeguards in place to ensure the AI systems operate accurately and without bias.