WASM examples / JS canvas animation examples

Some references for drawing with WASM:

fhtr.org/gravityring/sprites.html

Drawing a maze with Canvas + WASM - Zhihu (zhihu.com)

WebGL: resizing the canvas (webglfundamentals.org)

 


CanvasKit demos:

https://demos.skia.org/demo/

src\third_party\skia\demos.skia.org\Makefile

Based on that Makefile you can serve the demos locally: python -m SimpleHTTPServer 8123 (or, with Python 3: python -m http.server 8123)

Then visit: http://localhost:8123/demos/hello_world/index.html

If the externally hosted JS cannot be reached, replace it with a locally served copy:   <script type="text/javascript" src="https://unpkg.com/canvaskit-wasm@latest/bin/full/canvaskit.js"></script>
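Once canvaskit.js loads (locally or from the CDN), a minimal smoke test looks roughly like the sketch below. This is a hedged sketch: the locateFile path and the canvas id "foo" are assumptions about how the page is set up.

// Assumes canvaskit.js has been loaded via a <script> tag (exposing the global
// CanvasKitInit) and the page contains <canvas id="foo" width="300" height="300">.
CanvasKitInit({
  // Assumption: canvaskit.wasm is served from /build/ next to canvaskit.js.
  locateFile: (file) => '/build/' + file,
}).then((CanvasKit) => {
  const surface = CanvasKit.MakeCanvasSurface('foo');
  const paint = new CanvasKit.Paint();
  paint.setAntiAlias(true);
  paint.setColor(CanvasKit.Color4f(0.9, 0.2, 0.2, 1.0));

  // drawOnce renders a single frame and flushes the surface.
  surface.drawOnce((canvas) => {
    canvas.clear(CanvasKit.WHITE);
    canvas.drawCircle(150, 150, 100, paint);
  });
});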

 

 

Source behind the online SKP debugger:

E:\dev\chromium96\src\third_party\skia\modules\canvaskit\wasm_tools

Directories:
E:\dev\chromium96\src\third_party\skia\experimental\wasm-skp-debugger
E:\dev\chromium96\src\third_party\skia\tools\debugger

Related:
E:\dev\chromium96\src\third_party\skia\experimental\wasm-skp-debugger\debugger_bindings.cpp

#include "tools/debugger/DebugCanvas.h"

#include "tools/debugger/DebugLayerManager.h"

 

Source of the debugger website(?):

third_party/skia/modules/canvaskit/debugger_bindings.cpp

This wraps the player so that JS can call it:

class SkpDebugPlayer {
  public:
    SkpDebugPlayer() :
        udm(UrlDataManager(SkString("/data"))) {}
    // ...
};

Methods in the class:

    /* loadSkp deserializes a skp file that has been copied into the shared WASM memory.
     * cptr - a pointer to the data to deserialize.
     * length - length of the data in bytes.
     * The caller must allocate the memory with M._malloc where M is the wasm module in javascript
     * and copy the data into M.buffer at the pointer returned by malloc.
     *
     * uintptr_t is used here because emscripten will not allow binding of functions with pointers
     * to primitive types. We can instead pass a number and cast it to whatever kind of
     * pointer we're expecting.
     *
     * Returns an error string which is populated in the case that the file cannot be read.
     */
    std::string loadSkp(uintptr_t cptr, int length) {
      const uint8_t* data = reinterpret_cast<const uint8_t*>(cptr);
      // Both traditional and multi-frame skp files have a magic word
      SkMemoryStream stream(data, length);
      SkDebugf("make stream at %p, with %d bytes\n",data, length);
      const bool isMulti = memcmp(data, kMultiMagic, sizeof(kMultiMagic) - 1) == 0;


      if (isMulti) {
        SkDebugf("Try reading as a multi-frame skp\n");
        const auto& error = loadMultiFrame(&stream);
        if (!error.empty()) { return error; }
      } else {
        SkDebugf("Try reading as single-frame skp\n");
        // TODO(nifong): Rely on SkPicture's return errors once it provides some.
        frames.push_back(loadSingleFrame(&stream));
      }
      return "";
    }
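The JS side of that contract looks roughly like this. A hedged sketch, not the actual debugger frontend: the Debugger module name follows the test below, and exposing SkpDebugPlayer as a constructible embind class is an assumption based on the binding source above.

// fileContents: a Uint8Array with the bytes of an .skp / .mskp file.
// Debugger: the loaded emscripten module for the wasm skp debugger.
function loadSkpIntoPlayer(Debugger, fileContents) {
  // Allocate space in the wasm heap and copy the bytes in, as loadSkp() requires.
  const cptr = Debugger._malloc(fileContents.byteLength);
  Debugger.HEAPU8.set(fileContents, cptr);

  const player = new Debugger.SkpDebugPlayer(); // assumed embind constructor
  const error = player.loadSkp(cptr, fileContents.byteLength);
  Debugger._free(cptr);

  if (error) {
    throw new Error(error);
  }
  return player;
}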

 

The JS call site: src\third_party\skia\experimental\wasm-skp-debugger\tests\startup.spec.js

    it('can load and draw a skp file on a Web GL canvas', function(done) {
        LoadDebugger.then(catchException(done, () => {
            const surface = Debugger.MakeWebGLCanvasSurface(
                document.getElementById('debugger_view'));

            fetch('/debugger/sample.skp').then(function(response) {
                // Load test file
                if (!response.ok) {
                  throw new Error("HTTP error, status = " + response.status);
                }
                response.arrayBuffer().then(function(buffer) {
                    const fileContents = new Uint8Array(buffer);
                    console.log('fetched /debugger/sample.skp');
                    const player = Debugger.SkpFilePlayer(fileContents);
                    // Draw picture
                    player.drawTo(surface, 789); // number of commands in sample file
                    surface.flush();

                    console.log('drew picture to canvas element');
                    surface.dispose();
                    done();
                });
              });
        }));
    });

 

third_party/skia/tools/debugger/DebugCanvas.h

Drawing multiple SKPs and the transparent-background issue:

Code in debugger_bindings.cpp:

    /* drawTo asks the debug canvas to draw from the beginning of the picture
     * to the given command and flush the canvas.
     */
    void drawTo(SkSurface* surface, int32_t index) {
      // Set the command within the frame or layer event being drawn.
      if (fInspectedLayer >= 0) {
        fLayerManager->setCommand(fInspectedLayer, fp, index);
      } else {
        index = constrainFrameCommand(index);
      }

      auto* canvas = surface->getCanvas();
      canvas->clear(SK_ColorTRANSPARENT);
      if (fInspectedLayer >= 0) {
        // when it's a layer event we're viewing, we use the layer manager to render it.
        fLayerManager->drawLayerEventTo(surface, fInspectedLayer, fp);
      } else {
        // otherwise, its a frame at the top level.
        frames[fp]->drawTo(surface->getCanvas(), index);
      }
      surface->flush();
    }

    // Draws to the end of the current frame.
    void draw(SkSurface* surface) {
      auto* canvas = surface->getCanvas();
      canvas->clear(SK_ColorTRANSPARENT);
      frames[fp]->draw(surface->getCanvas());
      surface->getCanvas()->flush();
    }

 

    /**
        Executes all draw calls to the canvas.
        @param canvas  The canvas being drawn to
     */
    void draw(SkCanvas* canvas);

    /**
        Executes the draw calls up to the specified index.
        Does not clear the canvas to transparent black first,
        if needed, caller should do that first.
        @param canvas  The canvas being drawn to
        @param index  The index of the final command being executed
        @param m an optional Mth gpu op to highlight, or -1
     */
    void drawTo(SkCanvas* canvas, int index, int m = -1);

The implementation of the header above: third_party/skia/tools/debugger/DebugCanvas.cpp

void DebugCanvas::drawTo(SkCanvas* originalCanvas, int index, int m) {
    SkASSERT(!fCommandVector.isEmpty());
    SkASSERT(index < fCommandVector.count());

    int saveCount = originalCanvas->save();

    originalCanvas->resetMatrix();
    SkCanvasPriv::ResetClip(originalCanvas);

    DebugPaintFilterCanvas filterCanvas(originalCanvas);
    SkCanvas* finalCanvas = fOverdrawViz ? &filterCanvas : originalCanvas;

#if SK_GPU_V1
    auto dContext = GrAsDirectContext(finalCanvas->recordingContext());

    // If we have a GPU backend we can also visualize the op information
    GrAuditTrail* at = nullptr;
    if (fDrawGpuOpBounds || m != -1) {
        // The audit trail must be obtained from the original canvas.
        at = this->getAuditTrail(originalCanvas);
    }
#endif

    for (int i = 0; i <= index; i++) {
#if SK_GPU_V1
        GrAuditTrail::AutoCollectOps* acb = nullptr;
        if (at) {
            // We need to flush any pending operations, or they might combine with commands below.
            // Previous operations were not registered with the audit trail when they were
            // created, so if we allow them to combine, the audit trail will fail to find them.
            if (dContext) {
                dContext->flush();
            }
            acb = new GrAuditTrail::AutoCollectOps(at, i);
        }
#endif
        if (fCommandVector[i]->isVisible()) {
            fCommandVector[i]->execute(finalCanvas);
        }
#if SK_GPU_V1
        if (at && acb) {
            delete acb;
        }
#endif
    }

    if (SkColorGetA(fClipVizColor) != 0) {
        finalCanvas->save();
        SkPaint clipPaint;
        clipPaint.setColor(fClipVizColor);
        finalCanvas->drawPaint(clipPaint);
        finalCanvas->restore();
    }

    fMatrix = finalCanvas->getLocalToDevice();
    fClip   = finalCanvas->getDeviceClipBounds();
    if (fShowOrigin) {
        const SkPaint originXPaint = SkPaint({1.0, 0, 0, 1.0});
        const SkPaint originYPaint = SkPaint({0, 1.0, 0, 1.0});
        // Draw an origin cross at the origin before restoring to assist in visualizing the
        // current matrix.
        drawArrow(finalCanvas, {-50, 0}, {50, 0}, originXPaint);
        drawArrow(finalCanvas, {0, -50}, {0, 50}, originYPaint);
    }
    finalCanvas->restoreToCount(saveCount);

    if (fShowAndroidClip) {
        // Draw visualization of android device clip restriction
        SkPaint androidClipPaint;
        androidClipPaint.setARGB(80, 255, 100, 0);
        finalCanvas->drawRect(fAndroidClip, androidClipPaint);
    }

#if SK_GPU_V1
    // draw any ops if required and issue a full reset onto GrAuditTrail
    if (at) {
        // just in case there is global reordering, we flush the canvas before querying
        // GrAuditTrail
        GrAuditTrail::AutoEnable ae(at);
        if (dContext) {
            dContext->flush();
        }

        // we pick three colorblind-safe colors, 75% alpha
        static const SkColor kTotalBounds     = SkColorSetARGB(0xC0, 0x6A, 0x3D, 0x9A);
        static const SkColor kCommandOpBounds = SkColorSetARGB(0xC0, 0xE3, 0x1A, 0x1C);
        static const SkColor kOtherOpBounds   = SkColorSetARGB(0xC0, 0xFF, 0x7F, 0x00);

        // get the render target of the top device (from the original canvas) so we can ignore ops
        // drawn offscreen
        GrRenderTargetProxy* rtp = SkCanvasPriv::TopDeviceTargetProxy(originalCanvas);
        GrSurfaceProxy::UniqueID proxyID = rtp->uniqueID();

        // get the bounding boxes to draw
        SkTArray<GrAuditTrail::OpInfo> childrenBounds;
        if (m == -1) {
            at->getBoundsByClientID(&childrenBounds, index);
        } else {
            // the client wants us to draw the mth op
            at->getBoundsByOpsTaskID(&childrenBounds.push_back(), m);
        }
        // Shift the rects half a pixel, so they appear as exactly 1px thick lines.
        finalCanvas->save();
        finalCanvas->translate(0.5, -0.5);
        SkPaint paint;
        paint.setStyle(SkPaint::kStroke_Style);
        paint.setStrokeWidth(1);
        for (int i = 0; i < childrenBounds.count(); i++) {
            if (childrenBounds[i].fProxyUniqueID != proxyID) {
                // offscreen draw, ignore for now
                continue;
            }
            paint.setColor(kTotalBounds);
            finalCanvas->drawRect(childrenBounds[i].fBounds, paint);
            for (int j = 0; j < childrenBounds[i].fOps.count(); j++) {
                const GrAuditTrail::OpInfo::Op& op = childrenBounds[i].fOps[j];
                if (op.fClientID != index) {
                    paint.setColor(kOtherOpBounds);
                } else {
                    paint.setColor(kCommandOpBounds);
                }
                finalCanvas->drawRect(op.fBounds, paint);
            }
        }
        finalCanvas->restore();
        this->cleanupAuditTrail(at);
    }
#endif
}

third_party/blink/renderer/modules/canvas/canvas2d/base_rendering_context_2d.cc (a partial parameter list noted from that file):

double y, double width, double height, bool for_reset
 
WebGL context-creation options (TypeScript definitions):
/**
 * Options for configuring a WebGL context. If an option is omitted, a sensible default will
 * be used. These are defined by the WebGL standards.
 */
export interface WebGLOptions {
    alpha?: number;
    antialias?: number;
    depth?: number;
    enableExtensionsByDefault?: number;
    explicitSwapControl?: number;
    failIfMajorPerformanceCaveat?: number;
    majorVersion?: number;
    minorVersion?: number;
    preferLowPowerToHighPerformance?: number;
    premultipliedAlpha?: number;
    preserveDrawingBuffer?: number;
    renderViaOffscreenBackBuffer?: number;
    stencil?: number;
}
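For example, a couple of these options passed when creating a surface. A sketch only: the canvas id 'my_canvas' is an assumption, and note that the fields are numbers (0/1), not booleans.

const surface = CanvasKit.MakeWebGLCanvasSurface('my_canvas', CanvasKit.ColorSpace.SRGB, {
  majorVersion: 2,          // ask for a WebGL2 context
  antialias: 1,
  preserveDrawingBuffer: 1, // keep the buffer so the canvas can be read back
});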

 


Surface and canvas examples:

C:\dev\skia_source\modules\canvaskit\npm_build\multicanvas.html

C:\dev\skia_source\modules\canvaskit\npm_build\types\canvaskit-wasm-tests.ts

 

A surfaceTests snippet (from the TypeScript type tests):

function surfaceTests(CK: CanvasKit, gl?: WebGLRenderingContext) {
    if (!gl) {
        return;
    }
    const canvasEl = document.querySelector('canvas') as HTMLCanvasElement;
    const surfaceOne = CK.MakeCanvasSurface(canvasEl)!; // $ExpectType Surface
    const surfaceTwo = CK.MakeCanvasSurface('my_canvas')!;
    const surfaceThree = CK.MakeSWCanvasSurface(canvasEl)!; // $ExpectType Surface
    const surfaceFour = CK.MakeSWCanvasSurface('my_canvas')!;
    const surfaceFive = CK.MakeWebGLCanvasSurface(canvasEl, // $ExpectType Surface
        CK.ColorSpace.SRGB, {
        majorVersion: 2,
        preferLowPowerToHighPerformance: 1,
    })!;
    const surfaceSix = CK.MakeWebGLCanvasSurface('my_canvas', CK.ColorSpace.DISPLAY_P3, {
        enableExtensionsByDefault: 2,
    })!;
    const surfaceSeven = CK.MakeSurface(200, 200)!; // $ExpectType Surface
    const m = CK.Malloc(Uint8Array, 5 * 5 * 4);
    const surfaceEight = CK.MakeRasterDirectSurface({
        width: 5,
        height: 5,
        colorType: CK.ColorType.RGBA_8888,
        alphaType: CK.AlphaType.Premul,
        colorSpace: CK.ColorSpace.SRGB,
    }, m, 20);

    surfaceOne.flush();
    const canvas = surfaceTwo.getCanvas(); // $ExpectType Canvas
    const ii = surfaceThree.imageInfo(); // $ExpectType ImageInfo
    const h = surfaceFour.height(); // $ExpectType number
    const w = surfaceFive.width(); // $ExpectType number
    const subsurface = surfaceOne.makeSurface(ii); // $ExpectType Surface
    const isGPU = subsurface.reportBackendTypeIsGPU(); // $ExpectType boolean
    const count = surfaceThree.sampleCnt(); // $ExpectType number
    const img = surfaceFour.makeImageSnapshot([0, 3, 2, 5]); // $ExpectType Image
    const img2 = surfaceSix.makeImageSnapshot(); // $ExpectType Image
    const img3 = surfaceFour.makeImageFromTexture(gl.createTexture()!, {
      height: 40,
      width: 80,
      colorType: CK.ColorType.RGBA_8888,
      alphaType: CK.AlphaType.Unpremul,
      colorSpace: CK.ColorSpace.SRGB,
    });
    const img4 = surfaceFour.makeImageFromTextureSource(new Image()); // $ExpectType Image | null
    const videoEle = document.createElement('video');
    const img5 = surfaceFour.makeImageFromTextureSource(videoEle, {
      height: 40,
      width: 80,
      colorType: CK.ColorType.RGBA_8888,
      alphaType: CK.AlphaType.Unpremul,
    });
    const img6 = surfaceFour.makeImageFromTextureSource(new ImageData(40, 80)); // $ExpectType Image | null

    surfaceSeven.delete();

    const ctx = CK.GetWebGLContext(canvasEl); // $ExpectType number
    CK.deleteContext(ctx);
    const grCtx = CK.MakeGrContext(ctx);
    const surfaceNine = CK.MakeOnScreenGLSurface(grCtx!, 100, 400, // $ExpectType Surface
        CK.ColorSpace.ADOBE_RGB)!;

    const rt = CK.MakeRenderTarget(grCtx!, 100, 200); // $ExpectType Surface | null
    const rt2 = CK.MakeRenderTarget(grCtx!, { // $ExpectType Surface | null
        width: 79,
        height: 205,
        colorType: CK.ColorType.RGBA_8888,
        alphaType: CK.AlphaType.Premul,
        colorSpace: CK.ColorSpace.SRGB,
    });

    const drawFrame = (canvas: Canvas) => {
        canvas.clear([0, 0, 0, 0]);
    };
    surfaceFour.requestAnimationFrame(drawFrame);
    surfaceFour.drawOnce(drawFrame);
}

 

Example: build an SkImageInfo on a canvas, use makeSurface to get a resized compatible surface, and check whether it is GPU-backed. C++:

void draw(SkCanvas* canvas) {
    sk_sp<SkSurface> surface = SkSurface::MakeRasterN32Premul(5, 6);
    SkCanvas* smallCanvas = surface->getCanvas();
    SkImageInfo imageInfo = SkImageInfo::MakeN32Premul(10, 14);
    sk_sp<SkSurface> compatible = smallCanvas->makeSurface(imageInfo);
    SkDebugf("compatible %c= nullptr\n", compatible == nullptr ? '=' : '!');
    SkDebugf("size = %d, %d\n", compatible->width(), compatible->height());
}

The JS counterpart (from the CanvasKit type tests above):

const isGPU = subsurface.reportBackendTypeIsGPU(); // $ExpectType boolean

A test case written for a bug reported against CanvasKit (the bug has since been closed):

let htmlCanvas;
let skCanvas;
let skSurface;
const paint = new CanvasKit.Paint();

function getCanvasLayer(w,h) {
  htmlCanvas = document.getElementById("canvas");
  console.log("Canvas class: %s", htmlCanvas.constructor.name);
  htmlCanvas.height = h;
  htmlCanvas.width = w;
}

function prepareSurface(w, h) {
  if (skSurface && !skSurface.isDeleted()) {
    skSurface.dispose();
    console.log('Disposed surface');
  }
  const context = htmlCanvas.getContext("2d");
  skSurface = CanvasKit.MakeWebGLCanvasSurface(htmlCanvas);
  if (!skSurface) {
    console.log('Failed to make surface');
  }
}

function drawOffscreenCanvas(skps, w,h) {
  let picture = CanvasKit.MakePicture(skps);
  skCanvas = skSurface.getCanvas();
  skCanvas.save();
  skCanvas.drawPicture(picture);
  skCanvas.restore();
  picture.delete();
}

function flushOffscreenCanvas(w,h) {
  skSurface.flush();

  // Here is something interesting, remove line 19 and call line 20, after MakeWebGLCanvasSurface(htmlCanvas)
  // htmlCanvas.getContext("2d") returns null context.
  // htmlCanvas.getContext("webgl") returns null context.
  // htmlCanvas.getContext("webgl2") return a valid WebGL2RenderingContext.
  // Now if we move this getContext before MakeWebGLCanvasSurface, all 3
  // context "2d", "webgl" and "webgl2" return a valid context but 
  // MakeWebGLCanvasSurface will throw an error:
  // Uncaught (in promise) TypeError: Cannot read property 'version' of undefined

  //const context = htmlCanvas.getContext("webgl");
  //console.log("Context class: %s", context.constructor.name);
}

function drawFrame(skps) {
  const canvasLayerWidth = 3000;
  const canvasLayerHeight = 3000;
  const w = 1000;
  const h = 1000;
  getCanvasLayer(canvasLayerWidth,canvasLayerHeight);
  prepareSurface(w,h);
  drawOffscreenCanvas(skps,w,h);
  flushOffscreenCanvas(w,h); 
}

fetch('url', {
  'mode': 'cors'
})
  .then(response => response.blob())
  .then(blob => blob.arrayBuffer())
  .then(skpic => {
      console.log(skpic)
      drawFrame(skpic);
  });

 


Search: mdn canvas

 

Canvas tutorial: Canvas - Web API reference | MDN (mozilla.org), available in Chinese and English versions

Game development | MDN (mozilla.org): examples of building games with canvas

 

Cross-origin image saving (draw the image onto a canvas, then save it): Allowing cross-origin use of images and canvas - HTML: HyperText Markup Language | MDN (mozilla.org). A minimal sketch follows this link list.

A step-by-step guide to using canvas rotate() to build nested, solar-system-style motion (including the moon's revolution): TNTNT_T's blog, CSDN

3D maze movement (a basic ray-caster): https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/A_basic_ray-caster

Open web technology examples
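For the cross-origin image link above, a minimal sketch of drawing a CORS-enabled image to a canvas and exporting it (the image URL is a placeholder; the server must send Access-Control-Allow-Origin):

// Without crossOrigin = 'anonymous' (and a cooperating server), the canvas
// would be tainted and toDataURL() would throw a SecurityError.
const img = new Image();
img.crossOrigin = 'anonymous';
img.onload = () => {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  console.log(canvas.toDataURL('image/png').slice(0, 64) + '...');
};
img.src = 'https://example.com/picture.png'; // placeholder URL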

 

Canvas C++ implementation

C:\dev\chromium104\src\third_party\blink\renderer\core\html\canvas\html_canvas_element.cc

HTMLCanvasElement::CreateLayer

HTMLCanvasElement::Paint and HTMLCanvasElement::PaintInternal draw the 2D content, using an SKP or an image snapshot. The snapshot has to be unaccelerated (no GPU acceleration):

snapshot = snapshot->MakeUnaccelerated();

webgl:

  if (IsWebGL() && PaintsIntoCanvasBuffer())
    context_->MarkLayerComposited();

HTMLCanvasElement::Snapshot can snapshot either a 2D or a WebGL canvas into an image.

 scoped_refptr<StaticBitmapImage> HTMLCanvasElement::Snapshot(
    SourceDrawingBuffer source_buffer) const {
  if (size_.IsEmpty())
    return nullptr;

  scoped_refptr<StaticBitmapImage> image_bitmap;
  if (OffscreenCanvasFrame()) {  // Offscreen Canvas
    DCHECK(OffscreenCanvasFrame()->OriginClean());
    image_bitmap = OffscreenCanvasFrame()->Bitmap();
  } else if (IsWebGL()) {
    if (context_->CreationAttributes().premultiplied_alpha) {
      context_->PaintRenderingResultsToCanvas(source_buffer);
      if (ResourceProvider())
        image_bitmap = ResourceProvider()->Snapshot();
    } else {
      sk_sp<SkData> pixel_data =
          context_->PaintRenderingResultsToDataArray(source_buffer);
      if (pixel_data) {
        // If the accelerated canvas is too big, there is a logic in WebGL code
        // path that scales down the drawing buffer to the maximum supported
        // size. Hence, we need to query the adjusted size of DrawingBuffer.
        gfx::Size adjusted_size = context_->DrawingBufferSize();
        if (!adjusted_size.IsEmpty()) {
          SkColorInfo color_info =
              GetRenderingContextSkColorInfo().makeAlphaType(
                  kUnpremul_SkAlphaType);
          if (color_info.colorType() == kN32_SkColorType)
            color_info = color_info.makeColorType(kRGBA_8888_SkColorType);
          else
            color_info = color_info.makeColorType(kRGBA_F16_SkColorType);
          image_bitmap = StaticBitmapImage::Create(
              std::move(pixel_data),
              SkImageInfo::Make(
                  SkISize::Make(adjusted_size.width(), adjusted_size.height()),
                  color_info));
        }
      }
    }
  } else if (context_) {
    DCHECK(IsRenderingContext2D() || IsImageBitmapRenderingContext() ||
           IsWebGPU());
    image_bitmap = context_->GetImage();
  }

  if (image_bitmap)
    DCHECK(image_bitmap->SupportsDisplayCompositing());
  else
    image_bitmap = CreateTransparentImage(size_);

  return image_bitmap;
}

HTMLCanvasElement::toDataURL produces an image resource that can be embedded in HTML as <img src="data:...">.

HTMLCanvasElement::toBlob is its asynchronous counterpart; both are exposed as JS methods on the canvas element (a minimal sketch follows).
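A minimal sketch of both from JS:

// canvas is an HTMLCanvasElement that has already been drawn to.
const canvas = document.querySelector('canvas');

// toDataURL: synchronous; returns a base64 data: URL usable as <img src="...">.
const img = document.createElement('img');
img.src = canvas.toDataURL('image/png');
document.body.appendChild(img);

// toBlob: asynchronous; hands back a Blob (useful for uploads or object URLs).
canvas.toBlob((blob) => {
  console.log('blob size:', blob.size, 'url:', URL.createObjectURL(blob));
}, 'image/png');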

Deciding whether the canvas is rendered on the CPU or the GPU:

   // If the canvas meets the criteria to use accelerated-GPU rendering, and
    // the user signals that the canvas will not be read frequently through
    // getImageData, which is a slow operation with GPU, the canvas will try to
    // use accelerated-GPU rendering.
    // If any of the two conditions fails, or if the creation of accelerated
    // resource provider fails, the canvas will fallback to CPU rendering.
    UMA_HISTOGRAM_BOOLEAN(
        "Blink.Canvas.2DLayerBridge.WillReadFrequently",
        context_ && context_->CreationAttributes().will_read_frequently);

    if (ShouldAccelerate() && context_ &&
        !context_->CreationAttributes().will_read_frequently) {
      canvas2d_bridge_ = Create2DLayerBridge(RasterMode::kGPU);
    }
    if (!canvas2d_bridge_) {
      canvas2d_bridge_ = Create2DLayerBridge(RasterMode::kCPU);
    }
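From JS, the will_read_frequently attribute in that comment corresponds to the willReadFrequently context-creation attribute; a minimal sketch:

// Hinting that getImageData() will be called often makes Blink prefer the
// CPU (software) path for this canvas rather than the GPU path.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d', { willReadFrequently: true });

ctx.fillRect(0, 0, 100, 100);
const pixels = ctx.getImageData(0, 0, 100, 100); // cheap on the CPU path
console.log(pixels.width, pixels.height);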

GetSourceImageForCanvas

Notifying listeners that the canvas image changed: HTMLCanvasElement::NotifyListenersCanvasChanged()

Blocking WebGL when it is not allowed: HTMLCanvasElement::IsWebGLBlocked()

 

WebGL code: src\third_party\blink\renderer\modules\webgl

Canvas code: src\third_party\blink\renderer\modules\canvas (see the README.md in that directory)

 

C:\dev\chromium104\src\third_party\blink\renderer\core\paint\html_canvas_painter.cc

RecordForeignLayer

c:\dev\chromium104\src\third_party\blink\renderer\platform\graphics\paint\foreign_layer_display_item.cc


chrome://gpu shows whether GPU acceleration is currently enabled for the canvas.

What to Know

  • In Chrome, go to Chrome Menu > Settings > Advanced. Under System, enable Use hardware acceleration when available. If this switch is turned off, none of the other switches can enable the GPU.
  • To force acceleration, enter chrome://flags in the search bar. Under Override software rendering list, set to Enabled, then select Relaunch. This ignores the software-rendering blocklist and forces GPU rendering.
  • You can check whether hardware acceleration is turned on in Chrome by typing chrome://gpu into the address bar at the top of the browser.

 

In headless mode, Chrome does not enable the GPU. You can attach to the monitored headless browser via chrome://inspect, open chrome://gpu there, and see that everything is software-rendered.

Issue 765284: Support GPU hardware in headless mode

  • For canvas 2D:

Under software rendering, the canvas draw commands are recorded by the CPU into the layer's picture (an SKP), so the SKP can be captured and replayed to show the content.

Under GPU rendering, the layer produced is a TextureLayer that is drawn externally by the GPU; there are no recorded draw commands, so the content cannot be reproduced from an SKP. (Presumably it is drawn directly by the GPU.)

Canvas WebGL is obtained via canvas.getContext("webgl"); a canvas thus offers both 2D and 3D APIs.

When the canvas hands out a 3D WebGL context, SwiftShader software rendering may be required, and a TextureLayer is always generated in that case.

In some situations, inspecting the layers crashes (it is tied to that particular page, not to WebGL itself; opening a plain WebGL sample shows the Layers view fine), for example: Simple color animation - Web APIs | MDN (mozilla.org).

Canvas is a feature introduced by HTML5; you can think of it as a carrier, simply put, a blank sheet of paper. Canvas 2D is the built-in two-dimensional drawing interface, a 2D brush; Canvas 3D is the WebGL-based graphics interface, a 3D brush. You can pick different brushes to draw on the same canvas.

OpenGL is a low-level, driver-level graphics API (directly tied to the graphics card), similar to DirectX. That low-level OpenGL is out of reach of JavaScript running inside the browser, so to give the web stronger graphics capabilities WebGL was introduced in 2010. WebGL lets engineers call a wrapped subset of the OpenGL ES 2.0 standard interface from JS to get hardware-accelerated 3D graphics.
Skia is an open-source 2D graphics library. SwiftShader is a high-performance, CPU-based implementation of the OpenGL ES and Direct3D graphics APIs; its goal is to provide hardware independence for advanced 3D graphics.

 

WebGL - Web API reference | MDN (mozilla.org)

https://threejs.org/ (a WebGL 3D wrapper library)

Getting started with three.js

WebGL tutorials

WebGL: capturing a screenshot from the canvas

var gl = canvas.getContext("webgl",  {preserveDrawingBuffer: true});
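With preserveDrawingBuffer set, the drawing buffer survives compositing, so the rendered pixels can still be read back afterwards; a minimal sketch:

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });

// Render something, then capture the canvas contents as a data URL.
gl.clearColor(0.0, 0.5, 1.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
console.log(canvas.toDataURL('image/png').slice(0, 48) + '...');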

face tracker


 

Creating a canvas and testing that it enters accelerated mode:

TEST_F(HTMLCanvasPainterTest, Canvas2DLayerAppearsInLayerTree) {
  // Insert a <canvas> and force it into accelerated mode.
  // Not using SetBodyInnerHTML() because we need to test before document
  // lifecyle update.
  GetDocument().body()->setInnerHTML("<canvas width=300 height=200>");
  auto* element = To<HTMLCanvasElement>(GetDocument().body()->firstChild());
  CanvasContextCreationAttributesCore attributes;
  attributes.alpha = true;
  CanvasRenderingContext* context =
      element->GetCanvasRenderingContext("2d", attributes);
  gfx::Size size(300, 200);
  std::unique_ptr<Canvas2DLayerBridge> bridge = MakeCanvas2DLayerBridge(size);
  element->SetResourceProviderForTesting(nullptr, std::move(bridge), size);
  ASSERT_EQ(context, element->RenderingContext());
  ASSERT_TRUE(context->IsComposited());
  ASSERT_TRUE(element->IsAccelerated());

  // Force the page to paint.
  element->PreFinalizeFrame();
  context->FinalizeFrame();
  element->PostFinalizeFrame();
  UpdateAllLifecyclePhasesForTest();

  // Fetch the layer associated with the <canvas>, and check that it was
  // correctly configured in the layer tree.
  const cc::Layer* layer = context->CcLayer();
  ASSERT_TRUE(layer);
  EXPECT_TRUE(HasLayerAttached(*layer));
  EXPECT_EQ(gfx::Size(300, 200), layer->bounds());
}

 

Emscripten docs: Home » Porting » Connecting C++ and JavaScript

posted @ 2022-03-10 09:26  Bigben