In a blog post, the company's general counsel Kent Walker said Google would increase its use of technology to identify terrorist and extremist content.
"This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user," Walker said.
He said video analysis models had been used to find and assess more than half the terrorism-related content that Google had removed over the past six months.
Google would also pay 50 charities to search for and flag terrorist content.
Additionally, Google would take a tougher stance on videos that did not clearly violate its content guidelines but contained "inflammatory religious or supremacist content", making such videos harder to find on the web.
In March, many big-name companies in the US and UK pulled their ads from YouTube and the Google Display Network after it emerged that their ads were appearing alongside videos containing sexist, extremist and racist content.
And, Walker said, YouTube would expand its role in counter-radicalisation efforts. "Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the 'Redirect Method' more broadly across Europe," he said.
"This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages."
Walker said Google would collaborate with companies such as Facebook, Twitter and Microsoft "to establish an international forum to share and develop technology and support smaller companies and accelerate our joint efforts to tackle terrorism online".