Dynamic Rendering

Summary:

1.

Google has to process JavaScript multiple times in order for it to fully understand the content in it. This process is known as rendering. 

2.

Ironically, JavaScript development and SEO are often at odds with each other. JavaScript makes websites fun and interesting to use, while SEO makes them available for people to find in the first place.

Server-side rendering (SSR) was created to make them both possible.

 

Render Javascript With Search Engines in Mind | Prerender https://prerender.io/

Dynamic Rendering | Google Search Central  |  Google Developers : https://developers.google.com/search/docs/guides/dynamic-rendering

 

Dynamic Rendering: The Solution for Visitors and Search Engines

Client-Side Rendering

When a user visits a JavaScript site, their browser downloads several files and executes the code to figure out how the page should look. This is called client-side rendering because it uses the computing power of the client device.

That’s fine for most human users, but search engine crawlers move quickly, learning as much as possible without using up their computing resources. So when they find a JavaScript site, they might read only a few pages and get an incomplete picture of it.

 

Server-Side Rendering

Crawlers also maintain a second queue for rendering, so JavaScript sites may be added to the index late, resulting in poor rankings.

JavaScript sites can be rendered on the server side instead, but doing so is hard work for the server, can be tricky to set up, and can produce slow or broken interactive elements.

 

Dynamic Rendering

That’s where dynamic rendering comes in. The server can distinguish between human and robot, giving the human the full experience and the robot a lightweight HTML version.

Search engines endorse dynamic rendering and don’t penalize it as cloaking, making it the clear best choice.

 

Dynamic Rendering: How It Can Lead To SEO Success - Prerender https://prerender.io/how-to-be-successful-with-dynamic-rendering-and-seo/

 

How to Be Successful With Dynamic Rendering and SEO

December 11, 2020 • Dynamic Rendering

JavaScript web pages make SEO, an already tricky field, much more complicated.

SEO is one of the more technical fields within the digital marketing space. It’s like the popular circus act where the juggler spins three plates on poles. Technical SEO is like doing that on a tightrope. JavaScript SEO is lighting the tightrope, the plates, and yourself on fire.

It’s a tricky balancing act. Not only does your website need to be formatted in a way that makes it easy for search engines to process it, but it needs to perform better and load faster than the competition.

However, the nice thing about technical SEO is it’s one of the ranking factors that you have direct control over.

How do you make your JavaScript website easy for Google to read and understand, while giving your visitors a good web experience at the same time?

The answer: Dynamic rendering.

We’ll break down what dynamic rendering is, why it’s important, why it’s beneficial for your website’s SEO health, and how to implement it.

 

 

What Happens When Google Visits Your Webpage

Google uses an automated program, known as a bot, to index and catalogue every web page on the Internet.

Google’s stated purpose is to provide the user with the best possible result for a given query. To accomplish this, it seeks to understand what content is on a given web page, and assess its relative importance to other web pages about the same topic.

Most modern web development is done with three core languages: HTML, CSS, and JavaScript.

Google processes HTML in two steps: crawl and index. First, Googlebot crawls the HTML on a page. It reads the text and outgoing links on a page, and parses out the keywords that help it determine what the web page is about. Then, Googlebot indexes the page.

Google, and other search engines, prefer content that’s rendered in static HTML.

With JavaScript, this process is more complicated. Rendering JavaScript comes in three stages:

  • Crawl
  • Render
  • Index

Google has to process JavaScript multiple times in order for it to fully understand the content in it. This process is known as rendering. When Google encounters JavaScript on a web page, it puts it into a queue and comes back to it once it has the resources to render it.

 

 

The Problem With JavaScript SEO

HTML is standard in web development. Search engines can render HTML-based content easily. By comparison, it’s more difficult for search engines to process JavaScript, because it’s resource-intensive.

What this means is that web pages built with JavaScript eat up your crawl budget. Google states that its web crawler can process JavaScript; however, this hasn’t yet been proven in practice. It requires more resources from Google to crawl, render, and index your JavaScript pages. Other search engines such as Bing and DuckDuckGo are unable to parse JavaScript at all.

Because search engines have to use more resources to render your JavaScript pages, it’s likely many elements of your page won’t get indexed at all. Google and other search engines could skip over your metadata and canonical tags, for example, which are critical for SEO.

The thing is, JavaScript provides a good user experience. It’s the reason why you’re able to make flashy websites that make your users go “Wow, that was so cool!”

How do you make a modern web experience without sacrificing your SEO?

Most developers accomplish this with server-side rendering.

 

What’s the Difference Between Client-side and Server-side Rendering?

Most JavaScript frameworks, such as Angular, Vue, and React, default to client-side rendering. They defer rendering your page’s content until it can be done in the browser on the user’s device. In other words, the content is rendered for humans in the browser, rather than on the server where search engines can see it.

Client-side rendering is cheaper than other alternatives. It also reduces the strain on your servers without adding more work for your developers.

However, it carries the chance of a poor user experience. For example, it adds seconds of page load time to your web pages, which can lead to a high bounce rate.

Client-side rendering affects bots as well. Googlebot uses a two-wave indexing system. It crawls and indexes the static HTML first, then crawls the JavaScript content once it has the resources to do so. This means your JavaScript content might be missed in the indexing process.

That’s bad. You need Google to see that content if you want to rank higher than your competitors and to be found by your customers.

So what’s the alternative? For most development teams, it’s server-side rendering: configuring your JavaScript so that content is rendered on your website’s own server rather than on the client-side browser.

This renders your JavaScript content in advance, making it readable for bots. SSR has performance benefits as well. Both bots and humans get faster experiences, and there’s no risk of partial indexing or missing content.

 

So, Why Doesn’t Everyone Just Use Server-Side Rendering?

If server-side rendering were easy, then every website would do it and JavaScript SEO wouldn’t be a problem. But, server-side rendering isn’t easy.

SSR is expensive, time-consuming, and difficult to execute. You need a competent web development team to put it in place.

It also tends not to work with third-party JavaScript. Websites that use server-side rendering often require external JavaScript libraries or plugins that are difficult to configure.

This is the case with Angular, which requires the Angular Universal Library to enable server-side rendering. Enabling SSR with Angular requires a lot of moving parts. If just one piece is out of place, it could confuse web crawlers and lead to a drop in your search results.

React, on the other hand, relies on the Next.js framework to enable server-side rendering. That means your development team has to maintain an additional server at extra cost.

So how do you make frameworks like React SEO friendly to please your customers and search engines? The solution is dynamic rendering. 

 

What is Dynamic Rendering?

Dynamic rendering is the process of serving content based on the user agent requesting it.

Essentially, it’s a hybrid solution that gives the best of both worlds. It provides static HTML for bots, and dynamic JavaScript for users. It gives bots a machine-readable, stripped down, text-and-link-only version of your web page that’s simple for them to scan and parse. It gives your human users the fully-rendered, fully-optimized, intended web experience that gets them to interact with your website longer.

How Do You Implement Dynamic Rendering?

Implementing dynamic rendering is a three-step process.

First, you install a dynamic renderer (let’s say Prerender) to transform your dynamic content into static HTML.

Second, you choose the user-agents you think should receive static content. In most cases, this includes search engine crawlers like Googlebot and Bingbot. There might be others, such as LinkedInbot, you also wish to include. 

If your prerendering service slows down your server or your HTTP requests increase, consider implementing a cache to store content. Next, determine whether each user-agent needs desktop or mobile content; you can use dynamic serving to give them the appropriate version.

Finally, configure your servers to deliver static HTML.
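The user-agent check at the heart of these steps can be sketched in a few lines of Node-style JavaScript. The bot list and variant names below are illustrative assumptions, not Prerender’s actual configuration:

```javascript
// Sketch of the user-agent check at the heart of dynamic rendering.
// The bot list is illustrative, not exhaustive.
const BOT_UA = /googlebot|bingbot|linkedinbot|twitterbot|facebookexternalhit/i;

function isBot(userAgent) {
  return BOT_UA.test(userAgent || "");
}

// Decide which variant of a page a request should receive.
function chooseVariant(userAgent) {
  return isBot(userAgent) ? "prerendered-html" : "client-side-js";
}
```

In a real setup this decision would sit in front of your web server or middleware, routing bot traffic to the prerendered cache and everyone else to the normal JavaScript bundle.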

Verifying Your Configuration

Now you need to make sure that dynamic rendering is working properly. Here are a few things to check:

Mobile-Friendly Test: This is a function of Google Search Console’s suite of tools. Google made the switch to mobile-first indexing for all websites in September of 2020. In other words, Google looks at the mobile version of your website before the desktop one. Therefore it’s important your website is optimized for a mobile-first experience.

URL Inspection Tool: You need to make sure your website is properly crawled and indexed. The URL Inspection Tool will do just that.

Fetch as Google: This is what you will use to determine the effectiveness of your dynamic renderer. It allows you to make sure that individual URLs are properly submitted for indexing.

Structured Data Testing Tool: If you’re using schema markup in your website, then you’ll want to use this tool. It ensures your dynamic renderer isn’t interfering with schema markup.

When Should You Use Dynamic Rendering?

Dynamic rendering is an ideal way to fix your JavaScript SEO problems. Really, one of the biggest benefits of dynamic rendering is that it eliminates any issues related to your crawl budget while being cost-effective. And it doesn’t require advanced technical knowledge to implement.

So when should you use dynamic rendering?

Dynamic rendering is a good solution if you have a large website with lots of content that changes frequently (e.g. an e-commerce store with revolving inventory). If that’s the case, then your website requires quick and frequent indexing. Dynamic rendering will make sure that all of your pages get indexed and displayed properly in the SERPs.

It’s also beneficial for websites that rely on social media sharing, such as those with embeddable social media walls or widgets. 

Is Dynamic Rendering Cloaking?

Cloaking is the practice of serving markedly different content to search engine bots and humans. This is considered a black hat SEO tactic. While the short-term benefits of cloaking may be tempting, the potential risks are not worth it.

Dynamic rendering is not cloaking, as long as it serves the same end content to both crawlers and human users. It’s only cloaking if you serve completely different content to each.

Wrapping Up

JavaScript SEO is challenging. But there are things you can do to make it easier, and reduce the burden on your web development team and your budget.

If you want a dynamic renderer that solves all of your JavaScript SEO problems, look no further than Prerender. All you have to do is install our middleware. The rest takes care of itself. Get Google to finally work with you rather than against you.

 
 

 What Is Server Side Rendering | SSR Pros & Cons | Prerender https://prerender.io/what-is-srr-and-why-do-you-need-to-know/

What is Server-Side Rendering, and Why Do You Need to Know?

February 24, 2021 • SSR

The world of web development has changed rapidly.

Over the last fifteen years, web pages have evolved from simple HTML text to multimedia interactive experiences, elevating web development to an art. That’s like a civilization going from stone houses to space exploration in a century. 

Two of the most significant advancements in web development during this period have been the adoption of JavaScript frameworks to build web pages, and the field of Search Engine Optimization.

Ironically, JavaScript development and SEO are often at odds with each other. JavaScript makes websites fun and interesting to use, while SEO makes them available for people to find in the first place.

Server-side rendering (SSR) was created to make them both possible.

Read on to learn about what SSR is, why you should care, and how you can use it for yourself.

What is SSR?

Server-side rendering (SSR) is a method of loading your website’s JavaScript on your own server. When human users or search engine web crawlers like Googlebot request a page, the content reads as a static HTML page.

Historically, search engines have had difficulty crawling and indexing websites made using JavaScript rather than HTML.

Google indexes JavaScript-based web pages using a two-wave indexing system. When Googlebot first encounters your website, it crawls your pages and extracts all of their HTML, CSS and links, typically within a few hours.

Google then puts the JavaScript content in a queue, rendering it when it has the resources. Sometimes that takes days or weeks. During that time, your web pages are not being indexed and, therefore, not being found on Google. That’s a lot of traffic you’re missing out on. 

What’s worse, if your JavaScript pages can’t be crawled and indexed properly, Google reads them as a blank screen and ranks them accordingly, which can be catastrophic to your website’s SEO health.

Google has claimed that Googlebot is able to crawl and index JavaScript-based web pages just fine, but this has yet to be proven. Other search engines such as Bing, Yandex and DuckDuckGo cannot crawl JavaScript at all.

Regardless of the search engine, JavaScript presents a problem because it needs additional processing power to crawl and index, thereby eating up more of your website’s allotted crawl budget.

SSR is designed for this problem. It renders JavaScript on your own servers rather than putting the burden on the user agent, making the content fast and easily accessible when requested.
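As a framework-free illustration of that idea (not any particular library’s API), here is a sketch of a server turning data into a complete HTML string before responding, so a crawler sees real content instead of an empty app shell:

```javascript
// Framework-free illustration of server-side rendering: the server builds
// a complete HTML document from data before responding, so a crawler sees
// real content instead of an empty <div id="app">.
function renderPage(product) {
  return [
    "<!doctype html>",
    "<html><head><title>" + product.name + "</title></head>",
    "<body><h1>" + product.name + "</h1>",
    "<p>Price: $" + product.price.toFixed(2) + "</p>",
    "</body></html>",
  ].join("\n");
}

const html = renderPage({ name: "Used Car", price: 18999 });
```

A real SSR framework does the same thing at scale: it executes your component tree on the server and streams the resulting HTML to whoever asked for it.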

What is Client-Side Rendering, and How is it Different From Server-Side Rendering?

Client-Side Rendering (CSR) is the increasingly popular alternative to SSR.

The difference between the two is similar to ordering a prepared meal kit from a service like Blue Apron or Green Chef, or buying all the ingredients and making the meal yourself. 

Client-side rendering loads a website’s JavaScript in the user’s browser, not the website’s server. It’s ordering the prepared meal kit. 

Websites built with front-end JavaScript frameworks such as Angular, React, or Vue all default to CSR. This is problematic from an SEO standpoint because when web crawlers encounter a page on your website, all they see is a blank screen. 

Server-side rendering, meanwhile, is the more traditional option; it’s buying the groceries and cooking the meal yourself. It loads your JavaScript content on your website’s server. 

SSR dates back to the time when pages were rendered by back-end technologies such as PHP, and JavaScript was used simply to make HTML-based websites more interactive rather than to build them from scratch. 

With SSR, the server converts your JavaScript content into fully-formed HTML that any browser can read. Googlebot can see the basic HTML content on your web page without JavaScript in the way, while the user sees the fully-rendered page in all its glory. Your website is ranked properly on Google, and your user is treated to a web experience that’s a feast for the eyes and ears.

Advantages of Server-Side Rendering

We’ve already discussed some of the SEO benefits of server-side rendering: flawlessly crawled and indexed JavaScript pages, no more wasted crawl budgets or plummeting search rankings, no sluggish two-wave indexing process; just smooth, seamless indexation and the steady stream of Google traffic that comes with it.

SSR has even more advantages than the ones above. 

It optimizes web pages for social media, not just search engines. When someone shares your page on Facebook or Twitter, the post includes a preview of the page.
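That preview works because the server-rendered HTML already contains the Open Graph meta tags social scrapers look for. A minimal sketch (the tag set shown is illustrative, not exhaustive):

```javascript
// Because SSR emits complete HTML, Open Graph tags are present in the
// initial response, so Facebook/Twitter scrapers can build link previews.
function ogTags({ title, description, image }) {
  const esc = (s) => String(s).replace(/"/g, "&quot;"); // escape attribute quotes
  return [
    `<meta property="og:title" content="${esc(title)}">`,
    `<meta property="og:description" content="${esc(description)}">`,
    `<meta property="og:image" content="${esc(image)}">`,
  ].join("\n");
}

const tags = ogTags({ title: 'My "Car"', description: "Great deal", image: "car.png" });
```

With pure CSR these tags would only exist after JavaScript runs, and most social scrapers never execute JavaScript at all.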

It comes with a number of performance benefits that improve your website’s UX. SSR pages have much faster load time and a much faster first contentful paint, because the content is available in the browser sooner. That means less time your user has to look at a loading screen. 

JavaScript is resource-heavy and code-intensive. Downloading it onto a browser using CSR contributes significantly to page weight. A single JavaScript file averages out to about 1MB, whereas web development best practice advises keeping the entire page under 5MB max. 

The performance enhancements that come with SSR also have their own SEO benefits. Google gives preferential search rankings to the sites with the fastest page load speed. Faster load times improve user metrics such as session duration and bounce rate; Google’s algorithms look at these metrics and give you an extra SEO boost.

Faster web pages. Happy search engines. Happy user.

Server-Side Rendering Disadvantages

If SSR is so much more technically well-optimized and SEO-friendly, why don’t all websites use it?

Turns out, using SSR for your website does come with some significant drawbacks. It’s expensive, difficult to implement and requires a lot of manpower to set up. 

It also puts the burden of rendering your JavaScript content on your own servers, which will rack up your server maintenance costs.

Websites that use JavaScript frameworks need universal libraries to enable SSR: Angular requires Angular Universal, React needs Next.js, and Vue needs Nuxt.js. All of them require additional work from your engineering team, which costs you money.

SSR pages have a higher time-to-first-byte (TTFB) and a slower time-to-interactive: the user sees content sooner, but if they click on something before the JavaScript has finished hydrating, nothing happens. They’ll get frustrated and leave.

SSR is not a fix-all solution. You need to assess your website’s technical needs and challenges before putting it in place.

There’s a Better Solution Still: Prerendering

SSR has a lot of benefits that compensate for the technical deficiencies and deteriorated user experience of CSR. However, it has its own limitations and may not be the best solution for your website.

Prerendering is a great option that combines improved performance and indexation with ease-of-setup and implementation. It’s cost-effective, scalable and even recommended by Google’s own documentation.

To give users and search engines a fast and well-optimized web experience, sign up for Prerender for free today.

 

 

https://mp.weixin.qq.com/s/4SLlA0mJENvpPZQSi8-U2g

A Server-Side Rendering Solution Based on Next.js

 

Summary: when SEO is not a concern, the SPA architecture is very convenient: fast development, rapid iteration, and complete decoupling of front end and back end. It is especially well suited to landing pages and today’s Hybrid apps.

For consumer-facing pages, performance optimization is an eternal topic. As front-end technology evolves, each period brings better solutions, and server-side rendering based on React or Vue is arguably the most advanced of the current generation. Next.js is an open-source React framework focused on server-side rendering.

In applying Next.js to 58.com’s auto business, we folded our own business considerations into it, evolving it into a customized solution called CarNext that now serves more of our automotive business lines. We hope this article gives readers something to think about.

 

Background

Front-end/back-end separation is a further evolution of web architecture, a trend driven by the growing complexity of interaction logic and business logic in web applications; the two sides inevitably need to be decoupled. The coupling points are mainly the data interfaces and HTML rendering. With the rise of Ajax, data interfaces can already be separated cleanly: the back end no longer needs to call interfaces and render the page when it is first requested. As for HTML rendering, looking back at earlier web architectures, we used to have SSR schemes in which rendering was done entirely by the back end; this was highly coupled and is ill-suited to today’s development style. With the growth of React and Vue, a CSR-based SPA became the most radical, lowest-cost form of complete decoupling: the HTML, CSS, JS and other static assets can live on a static server, and the API is the only medium between front end and back end. This approach has its own problems, though. Finally, Node as a middle layer makes isomorphic JavaScript programming feasible, compensating for some of the SPA’s shortcomings.

 

Comparing Front-End/Back-End Separation Architectures

SPA-Based Architecture

First, let’s look at how an SPA works, via a flowchart:

 

 

[Figure: SPA workflow]

As the figure shows, in an SPA design the only coupling point between front end and back end is basically the API. Because of this, the whole architecture is completely decoupled from the back end, which greatly improves development efficiency.

First, for a given project, the front end and back end only need to agree on an interface contract and data format before development; each side can then build its own features independently, and only the interface data needs a joint debugging pass before each side ships. Rendering is stripped entirely out of the server.

Second, compared with a traditional website where every page fetches data from the server, an SPA can manage its sub-pages through a front-end routing strategy: route changes translate into rendering different components and calling different functions, while the back end only has to design interfaces, not manage page routes.
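The front-end routing strategy in the second point can be sketched as a simple route table that maps paths to render functions (the component names are placeholders):

```javascript
// Sketch of SPA front-end routing: a table maps paths to render functions,
// so navigation never asks the server for a new HTML page.
const routes = {
  "/": () => "<HomePage/>",
  "/list": () => "<ListPage/>",
};

function resolve(path) {
  const render = routes[path];
  return render ? render() : "<NotFound/>";
}
```

In a real SPA, frameworks hook this table up to the History API so that clicking a link swaps components instead of reloading the page.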

This architecture also has two unavoidable drawbacks:

First, adopting an SPA has traditionally meant giving up SEO. Search-engine crawlers extract information by analyzing a site’s HTML document and never took JavaScript into account from the start. Even though SPA sites are increasingly popular, only Google’s crawler has preliminary SPA support, and even that requires special handling when the site is built.

Second, an SPA must wait for its static assets to finish loading, then call the server’s API for first-screen data, and only then render the HTML. During that time the user sees a blank page, so developers have to spend extra effort on performance.

For these two drawbacks, our project practice offers the following solutions:

For the long blank-screen time: first, we don’t need to render the whole page, only the first screen; everything below the fold can be lazy-loaded. For the first screen itself we can use a skeleton screen: a page “skeleton” replaces the blank page, giving the user immediate visual feedback and preserving their patience.

 

 

 

[Figure: skeleton screen illustration]

For SEO, the industry also has a set of suitable solutions, but implementation cost has to be weighed.

 

 

[Figure: SEO solutions]


 

Architecture with Node as a Middle Layer

Placing a Node rendering layer between the web server and the browser is another front-end/back-end separation design. As in the SPA model, the API remains the only medium between front end and server, but this architecture lets the Node layer proxy and aggregate interfaces, design routes, and render HTML. Take the used-car “Darwin” project as an example:

 

 

 

[Figure: architecture with Node as a middle layer]

This architecture has several advantages:

First, requests that previously went from the front end straight to the server can instead be issued through Node. Since Node originates the API calls, the server no longer needs cross-origin configuration. Also, when interfaces from different departments all serve a single front-end feature, we can wrap them in the Node layer and expose only the aggregated interface to the front end.

Second, because HTML rendering and templates are handled by Node, front end and back end are decoupled; the back end is only responsible for writing interfaces.
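The interface aggregation in the first point can be sketched like this, with fetchA and fetchB standing in for calls to two departments’ upstream APIs:

```javascript
// Sketch of interface aggregation in a Node middle layer: two upstream
// calls are merged into one payload for the front end. fetchA/fetchB
// stand in for real HTTP calls to different departments' services.
async function fetchA() { return { price: 18999 }; }
async function fetchB() { return { dealer: "58 Auto" }; }

async function aggregated() {
  const [a, b] = await Promise.all([fetchA(), fetchB()]);
  return { ...a, ...b }; // one merged response for the page
}
```

The front end then sees a single endpoint, and cross-origin setup on the original back ends becomes unnecessary because Node, not the browser, makes the upstream calls.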

The drawback is equally obvious:

With this design, everything the Node middle layer does could in fact be done by the back-end server as well: put the templates on the server and just handle the page content there.

Summary: for standalone projects, such as admin/back-office systems, this architecture achieves front-end/back-end decoupling.

a) During development, we only need to joint-debug the interfaces with the back end; routing and page rendering are handled by the front end, which saves communication cost and improves development efficiency.

b) During maintenance, if the front end changes a template path or adds a third-party library, there is no need to involve the back end in a release; we just deploy the Node service ourselves. That is also a better architecture for future iteration.

c) This design, too, only suits sites that don’t care about SEO; it fits user centers, admin systems, and the like.

 

Isomorphic JavaScript Architecture Based on Node

The goal of isomorphic JS is code that can both render in the browser and run on the server. Older JS could hardly be separated from DOM and BOM manipulation, which made it hard to run the same code anywhere but the browser. With today’s React and Vue, though, the view layer is driven by the data layer: before the browser parses it, an HTML document is just platform-neutral text, so the Node layer can take the string produced by JS and complete the isomorphism with the client. Taking React as the example, the concrete scheme looks like this:

 

 

 

[Figure: React-based isomorphism]

In this design, Node as the middle layer is not there to replace Java for rendering HTML, but to support isomorphic JavaScript. The benefit is SEO friendliness: the HTML is rendered into the template where crawlers can analyze it. First-screen rendering is also a good experience, since the user can immediately see the first screen’s content.

Summary: this isomorphic scheme is well suited to pages with high UX demands and SEO requirements, such as the main mobile-web sites of our used-car business. Being able to keep SEO while using today’s mainstream front-end frameworks makes it a good choice.

1) Using React or Vue in this architecture improves our development efficiency, reduces the frequency of DOM manipulation, and improves page performance.

2) The resulting project supports SEO and still offers a good user experience.

 

Problems You Face When Building Server-Side Rendering Yourself

If we built such an SSR framework ourselves, we would face the following problems:

a) We would have to handle matching front-end and server-side routes consistently ourselves.

b) We would have to keep the Redux data consistent between client and server during isomorphism.

c) When first-screen data must come from the server, rendering the page directly after the server fetches the data, instead of waiting for the client to load and then render, greatly improves the user experience and avoids the blank screen. This, too, is something a home-grown SSR framework would have to handle.
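Problem (b), keeping the store consistent across the isomorphic boundary, is commonly solved by serializing the server’s state into the HTML for the client to pick up. A minimal sketch; the global name window.__INITIAL_STATE__ is a common convention, not anything specific to this project:

```javascript
// Sketch of server/client state sync: the server serializes the store it
// used into a <script> tag, and the client reads it back on hydration to
// create an identical store.
function serializeState(state) {
  // escape "<" so "</script>" inside the data cannot break out of the tag
  return "<script>window.__INITIAL_STATE__=" +
    JSON.stringify(state).replace(/</g, "\\u003c") + "</script>";
}

const tag = serializeState({ user: "wang", loggedIn: true });
```

On the client, the store is then initialized with `window.__INITIAL_STATE__` instead of refetching the same data.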

The Solution: A CarNext Architecture Based on Next.js

For front-end projects, an isomorphic architecture like this is a very suitable design. After research, we chose Next.js, a React-based SSR solution that solves the isomorphism problems; a simple second layer of wrapping is enough to tailor an SSR framework to our business.

Next.js has the following characteristics:

1. Back-end data communication handling

 

 

[Figure: data handling in Next.js]

2. Routing

Next.js ships with a routing component and wraps routing for you: it reads the files under /pages and generates routes dynamically.
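The file-to-route mapping can be sketched as a small pure function; this is a simplification of what Next.js actually does, ignoring dynamic routes and nested index files:

```javascript
// Sketch of Next.js-style file-based routing: files under /pages map to
// URL paths (index.js becomes "/").
function fileToRoute(file) {
  const route = "/" + file.replace(/\.js$/, "");
  return route === "/index" ? "/" : route;
}
```

So `pages/index.js` serves "/", `pages/about.js` serves "/about", and nested files become nested paths.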

3. Rich extensibility: Babel, Webpack, Express, Koa and more can all be extended.

With such a mature server-side framework in place, we designed the CarNext framework on top of Next.js to better support our business development.

1) Data management: for data communication between React components, Redux is the usual choice.

On top of Next.js, Redux can be wrapped through a higher-order component (HOC) for data management.

 

 

[Figure: Redux handling]

Adopting the HOC design pattern:

a) simplifies the App component’s code by moving the Redux-related logic into the HOC, separating the component’s concerns;

b) lets front end and back end maintain the same global data state inside Redux when both need it. The figure below uses the login flow as an illustration:

 

 

[Figure: syncing the store]

As the example shows, when the client hydrates it can read the login information the server put into the store directly; the client does not have to handle the login logic itself, it just consumes the login info and builds the rest of the application. This closes the loop on data sharing.

The code is as follows:
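The original snippet is shown as an image; as an illustrative stand-in (not the article’s actual code), here is the HOC pattern in plain JavaScript, with a component modeled as a simple render function:

```javascript
// Sketch of the HOC pattern: a function that takes a component (here just
// a render function) and returns a new one with the store injected,
// keeping store wiring out of the App component itself.
function withStore(store) {
  return function (Component) {
    return function (props) {
      return Component({ ...props, store });
    };
  };
}

const Profile = ({ store }) => "Hello, " + store.user;
const ConnectedProfile = withStore({ user: "wang" })(Profile);
```

Real Redux bindings do the same thing with subscriptions and dispatch added, but the shape of the wrapper is identical.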

 

2) Interface proxying: during development we may run into cross-origin and other front/back-end communication problems. CarNext has a built-in proxy scheme, so client requests to the server can be relayed by CarNext, avoiding cross-origin issues. The code is as follows:

 

 

In the proxy code, requests whose path begins with /api are forwarded by CarNext; all other requests simply fall through to next().
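That forwarding rule can be sketched as a small decision function (the backend host below is a placeholder, not CarNext’s real target):

```javascript
// Sketch of the CarNext proxy rule: /api requests are proxied to the
// backend; everything else falls through to the next middleware.
function routeRequest(path) {
  if (path.startsWith("/api")) {
    return { action: "proxy", target: "http://backend.internal" + path };
  }
  return { action: "next" };
}
```

Because the Node layer, not the browser, issues the forwarded request, the backend never sees a cross-origin call.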

3) Error logging: inside CarNext we use the koa-json-error middleware and connect it to MongoDB, storing the project’s error logs in the database so that problems can be located quickly when the service misbehaves. The code is as follows:
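The original snippet is shown as an image; as an illustrative stand-in (not koa-json-error’s actual API), here is the error-logging idea with an in-memory array in place of MongoDB:

```javascript
// Sketch of error-logging middleware: failed requests are recorded with
// their URL and message so problems can be located later. An in-memory
// array stands in for the MongoDB collection used in the real project.
const errorLog = [];

async function withErrorLogging(handler, request) {
  try {
    return await handler(request);
  } catch (err) {
    errorLog.push({ url: request.url, message: err.message, at: Date.now() });
    return { status: 500, body: { error: err.message } };
  }
}
```

In the real setup, a query route over the stored documents replaces reading the array directly.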

 

 

A route is likewise defined to query the error logs:

 

 

 

4) Other settings, such as CSS and Sass, can be configured directly as styles.

 

 

 

[Figure: the overall CarNext framework design]

 

 

 

Applying CarNext in projects:

1. In a production project, rendering the home page with CarNext was twice as fast as the client-rendered version.

2. In production, cross-origin requests to back-end interfaces from inside the Android WebView were losing cookies; CarNext’s interface proxy solved this, since CarNext has no cross-origin problem and Node calls the interface directly.

3. Team members used Node to call the server’s intranet address instead of the public one, greatly reducing total data-communication time.

 

 

[Figure: intranet vs. public-network request comparison]

4. Querying the error logs allowed us to locate failures quickly.

 

 

 

Summary and Outlook

As described above, for campaign pages or Hybrid feature pages an SPA is a suitable design choice; for admin systems, Node as a middle rendering layer works well; and for sites that care about SEO and first-screen rendering speed, the isomorphic JS design is the better fit.

For the used-car business line, which is mostly front-end work, improving our development efficiency, giving users a good page experience, and optimizing SEO are questions we must keep asking. The CarNext framework was built to solve these real problems, and we hope it empowers our front-end development.

 

About the author: Shen Dawei, senior front-end engineer, 58.com Auto Business Group.

 

 

Rendertron, an SEO Solution for Vue Single-Page Applications - URCloud https://www.urcloud.co/archives/109/

 

Using rendertron + nginx for SPA SEO - W7 Wiki https://www.w7.wiki/develop/4205.html

Using rendertron + nginx for SPA SEO

1. Clone the source code, enter its root directory, and install the dependencies:

git clone https://github.com/GoogleChrome/rendertron.git

cd rendertron

npm install

 

2. Build the source code:

npm run build

During npm install, a Chromium browser is installed automatically, but it cannot start normally because system dependencies are missing.

3. Install Chromium’s dependencies:

 

 

 

yum install pango.x86_64 libXcomposite.x86_64 libXcursor.x86_64 libXdamage.x86_64 libXext.x86_64 libXi.x86_64 libXtst.x86_64 cups-libs.x86_64 libXScrnSaver.x86_64 libXrandr.x86_64 GConf2.x86_64 alsa-lib.x86_64 atk.x86_64 gtk3.x86_64 -y

 

 

 

4. Install pm2:

npm install pm2 -g

 

5. Run rendertron persistently with pm2:

pm2 start build/rendertron.js
 
Test it:

 

curl localhost:3000/render/http://www.xxx.com

 

 
If it outputs HTML with real content, it is running correctly.

6. Add the following to the nginx configuration:

# SEO proxy
location / {
  try_files $uri @prerender; 
} 
location @prerender {
  set $prerender 0;
  if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator") {
    set $prerender 1; 
  }
  if ($args ~ "_escaped_fragment_") {
    set $prerender 1; 
  } 
  if ($http_user_agent ~ "Prerender") {
    set $prerender 0; 
  } 
  if ($uri ~* "\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)") {
    set $prerender 0; 
  } 
  #resolver kube-dns.kube-system.svc.cluster.local valid=5s; 
  if ($prerender = 1) {
    set $prerender "rendertron"; 
    # rewrite .* /$scheme://$host$request_uri? break; 
    proxy_pass http://127.0.0.1:3000/render/$scheme://$host$request_uri?wc-inject-shadydom=true;
  }
  if ($prerender = 0) {
    rewrite .* /index.html break;
  }
}
# SEO proxy - END

 

After updating the configuration, restart nginx and verify that it works:

 

curl -A "Baiduspider" http://website.app.dev.supermanapp.cn

 

If HTML with content is returned, it worked.

More and more front-end projects today are single-page applications built with Vue, React, or Angular, but a major pain point of SPAs is that they are SEO-unfriendly. Baidu Analytics supports SPAs, but their SEO still suffers.

Viable solutions

There are currently three solutions: (1) prerendering, (2) server-side rendering, and (3) rendering only for SEO. Options one and two both require changes to the existing project, option two the most. So the lowest-cost change is option three, and that is the approach this article takes: rendering targeted at SEO.

rendora and rendertron

There are two mainstream open-source options, rendora and rendertron. rendora is written in Go and supports Docker deployment; rendertron is built on Node by a Google team, with a big tree to lean on. Judging by star counts and update activity, rendertron is the better choice: rendora has not been updated for over a year and has installation bugs that remain unresolved.

How rendertron works

rendertron starts a Node service that intercepts requests. Requests coming from search engines are intercepted, the requested URL is rendered with puppeteer, and the rendered HTML is returned to the search engine; ordinary users’ requests are still forwarded to the corresponding backend server.

Environment and stack

Operating system: Ubuntu 18.04
Front end: Vue
Back-end API: NestJS
Web server: nginx
SEO prerendering: rendertron

Step 1: install and start rendertron

git clone https://github.com/GoogleChrome/rendertron.git
cd rendertron
npm install
npm run build
npm run start

rendertron may report errors during startup because puppeteer’s environment is incomplete; install the required packages following puppeteer’s official troubleshooting guide, chrome-headless-doesnt-launch-on-unix.
Once it starts without problems, you can move it into the background with pm2.

Step 2: configure nginx to intercept search engines

  location / {
        if ($http_user_agent ~* "bot|bing|yandex|duckduckgo|baiduspider|googlebot|360spider|sogou spider") {
            rewrite ^/(.*) /render/https://sz.urcloud.co/$1 break;
            proxy_pass http://127.0.0.1:3001;  # rendertron port
        }
        try_files $uri $uri/ /index.html; 
    }
    location /api {
        proxy_pass http://127.0.0.1:3000; # backend API
    }

Step 3: test search-engine access vs. normal user access

Use Postman with the request header User-Agent: baiduspider to simulate crawler access alongside normal user access. If everything works, the crawler request returns the fully rendered HTML page.

Follow-up notes

Note that this approach only changes the nginx configuration and adds a rendertron service; the existing project is untouched. rendertron officially provides plugins for back-end languages such as Node and Python: nginx forwards to the back end, and the back-end project decides whether the request came from a search engine and whether to forward the URL to rendertron. That is arguably cleaner, though it requires small adjustments to the existing project; interested readers can explore it.

Posted 2021-07-01 13:49 by papering