Using a drawing board in iOS

A project requirement called for a drawing board that can be drawn on in real time, with the set of drawn points saved and sent from one end to the other for live synchronization. I tried three approaches in turn, and each came with its own pitfalls (the pros and cons of each are covered below). The demos you can find online tend to be fairly simple, so I am sharing this as a reference: UIBezierPath line drawing, NSUndoManager + Quartz 2D, and OpenGL ES.


 
(Demo animation: 画板.gif)

The demo UI is rough; please bear with it.


I. Implementing the drawing board with UIBezierPath

1. UIBezierPath
UIBezierPath lets you create vector-based paths; the class is the Core Graphics framework's wrapper around paths. With it you can define simple shapes such as ellipses and rectangles, as well as shapes made up of multiple straight and curved segments.

UIBezierPath is a wrapper around the CGPathRef data type. Vector-based paths are all built from straight lines and curves: straight segments build rectangles and polygons, curves build arcs, circles and other more complex curved shapes.

Steps for drawing with UIBezierPath:

Create a UIBezierPath object
Call -moveToPoint: to set the starting point of the first segment
Add lines or curves to define one or more subpaths
Configure the drawing-related properties of the UIBezierPath object, e.g. the stroke attributes and fill style

That covers the basics; this fellow Jianshu user's post explains them in much more detail:

http://www.jianshu.com/p/734b34e82135
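As a quick illustration of the four steps above, here is a minimal sketch inside a view's drawRect: (the coordinates and values are placeholders, not taken from the demo):

    - (void)drawRect:(CGRect)rect {
        UIBezierPath *path = [UIBezierPath bezierPath];      // 1. create the path object
        [path moveToPoint:CGPointMake(20, 20)];              // 2. set the starting point
        [path addLineToPoint:CGPointMake(120, 80)];          // 3. add line/curve segments
        [path addQuadCurveToPoint:CGPointMake(220, 20)
                     controlPoint:CGPointMake(170, 120)];
        path.lineWidth = 5;                                  // 4. configure drawing attributes
        path.lineCapStyle = kCGLineCapRound;
        [[UIColor blackColor] setStroke];
        [path stroke];                                       // stroke the path
    }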

2. Implementation details
    
    2.1 Because the board supports an eraser and switching pen colors, we need a subclass of UIBezierPath with two extra properties that record each path's color and whether it is an eraser stroke:

   // pen color for this path
    @property (nonatomic,copy) UIColor *lineColor;
    // whether this path is an eraser stroke
    @property (nonatomic,assign) BOOL isErase;
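For completeness, the subclass interface might look like the following minimal sketch (the class name DCBeizierPath is the one used later in the demo code; the exact declaration is my assumption):

    // DCBeizierPath.h : a UIBezierPath subclass that carries its own drawing state
    #import <UIKit/UIKit.h>

    @interface DCBeizierPath : UIBezierPath

    // pen color used when stroking this path
    @property (nonatomic, copy) UIColor *lineColor;
    // YES if this path was drawn in eraser mode
    @property (nonatomic, assign) BOOL isErase;

    @end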


     2.2 Override touchesBegan, touchesMoved and touchesEnded in the drawing view
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    // in touchesBegan, create a new beziPath and move it to the starting point
    self.beziPath = [[DCBeizierPath alloc] init];
    self.beziPath.lineColor = self.lineColor;
    self.beziPath.isErase = self.isErase;
    [self.beziPath moveToPoint:currentPoint];
    // store the path in the array so the data is kept
    [self.beziPathArrM addObject:self.beziPath];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint, currentPoint);
    // in touchesMoved, append each point to self.beziPath;
    // a quadratic Bezier curve gives smoother corners than addLine (no sharp bends)
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    // trigger the drawing method (setNeedsDisplay calls drawRect: automatically; never call drawRect: directly)
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint, currentPoint);
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    // trigger one last redraw
    [self setNeedsDisplay];
}
// compute the midpoint of two points
CGPoint midPoint(CGPoint p1, CGPoint p2)
{
    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}


2.3 Implement drawRect:
  #pragma mark - Drawing
  - (void)drawRect:(CGRect)rect
  {
    // stroke every saved path
    if (self.beziPathArrM.count) {
        for (DCBeizierPath *path in self.beziPathArrM) {
            if (path.isErase) {
                // eraser: stroke with the clear color
                [[UIColor clearColor] setStroke];
            } else {
                // pen: stroke with the path's own color
                [path.lineColor setStroke];
            }

            path.lineJoinStyle = kCGLineJoinRound;
            path.lineCapStyle = kCGLineCapRound;
            if (path.isErase) {
                path.lineWidth = kEraseLineWidth;
                // blend mode for the eraser
                [path strokeWithBlendMode:kCGBlendModeCopy alpha:1.0];
            } else {
                path.lineWidth = kLineWidth;
                // blend mode for normal drawing
                [path strokeWithBlendMode:kCGBlendModeNormal alpha:1.0];
            }
            [path stroke];
        }
    }
    [super drawRect:rect];
  }


2.4 Clearing the board
    - (void)clear{
        [self.beziPathArrM removeAllObjects];
        [self setNeedsDisplay];
    }

   3. Summary. Reading this far you might say: that is all very simple, nothing hard, just the basics. Here are the pros, cons and use cases I found for this approach.
  • Pros
    1. This is the simplest approach, and it uses nothing but plain Objective-C APIs.
    2. Adding paths from an already known set of stored points and stroking them is fast.

  • Cons
    1. If every line you have drawn must stay on screen, you have to keep every drawing path around.
    2. Every time a newly added line is drawn, all of the lines before it are redrawn as well, which wastes performance.
    3. Even if you do not mind that waste, there is still a problem: the more segments you draw, the larger the gaps between the touch points the screen manages to register, drawing visibly slows down, and you gradually start to see the earlier segments being redrawn.

  • Use cases
    1. Drawing a few simple segments once, with no later modification.
    2. Simple decorative lines needed for UI effects.
    3. Not recommended when frequent modification and redrawing are needed.

II. Implementing the drawing board with NSUndoManager + Quartz 2D

Given the pros and cons of the approach above and our actual requirements, we are looking for something lower level and more efficient, and something that does not have to redraw the segments that are already on screen. That is what the NSUndoManager + Quartz 2D combination gives us.

1. A quick note on NSUndoManager

Everyone makes mistakes. Fortunately, Foundation offers more than spell checking to save us: Cocoa has a simple, robust API for managing undo and redo, NSUndoManager.

By default every application window has an undo manager, and any object in the responder chain can maintain a custom undo manager for the local undo and redo operations on its own page. UITextField and UITextView use this mechanism to provide automatic undo and redo for text editing. Deciding which other actions can be undone, however, is left to the application developer.

Making an action undoable takes three steps: perform the change, register a reversible "undo operation", and respond when the change is undone.
See http://nshipster.cn/nsundomanager/ for details.
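A minimal sketch of those three steps (the addPath:/removePath: pair and the beziPathArrM array are illustrative names borrowed from the first approach, not part of this demo):

    // 1. Perform the change and, at the same time, register the inverse operation.
    - (void)addPath:(DCBeizierPath *)path {
        [self.beziPathArrM addObject:path];
        [self.undoManager registerUndoWithTarget:self
                                        selector:@selector(removePath:)
                                          object:path];
        [self setNeedsDisplay];
    }

    // 2. The inverse operation; registering its own inverse makes the action redoable.
    - (void)removePath:(DCBeizierPath *)path {
        [self.beziPathArrM removeObject:path];
        [self.undoManager registerUndoWithTarget:self
                                        selector:@selector(addPath:)
                                          object:path];
        [self setNeedsDisplay];
    }

    // 3. Respond to the user, e.g. from an "undo" button:
    // [self.undoManager undo];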

2. A quick note on Quartz 2D

    When you draw a line, the drawing functions internally work with a path by default; all of the drawing information goes into that path.
  > 1. Create a path. Creating a CGMutablePathRef gives you a path object used to hold the drawing information.
  > 2. Add the drawing information to the path. The older style adds the points directly to the graphics context (ctx); the context then creates a path internally to hold the drawing information. The piece of storage the context uses for drawing information is, in effect, a CGMutablePathRef.

  > 3. Add the path to the context.
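A minimal sketch of those three steps inside drawRect: (the coordinates are placeholders):

    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // 1. create a mutable path to hold the drawing information
        CGMutablePathRef path = CGPathCreateMutable();

        // 2. add the drawing information (a move and a quad curve) to the path
        CGPathMoveToPoint(path, NULL, 20, 20);
        CGPathAddQuadCurveToPoint(path, NULL, 120, 120, 220, 20);

        // 3. add the path to the context and stroke it
        CGContextAddPath(ctx, path);
        CGContextSetLineWidth(ctx, 5);
        CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
        CGContextStrokePath(ctx);

        CGPathRelease(path);
    }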

3. Combining the two gives us more efficient drawing and, because the result is cached as we go, the segments that are already drawn never need to be redrawn.


4. Implementation details

- (void)awakeFromNib{
    [super awakeFromNib];
    [self setup];
}

// initial pen setup
- (void)setup
{
    self.multipleTouchEnabled = YES;
    // initial line width
    self.lineWidth = 5;
    // initial pen color
    self.lineColor = [UIColor blackColor];

    // set up the NSUndoManager
    NSUndoManager *tempUndoManager = [[NSUndoManager alloc] init];
    [tempUndoManager setLevelsOfUndo:10];
    [self setUndoManager:tempUndoManager];
}


// clear the canvas and the stored data
- (void)clear
{
    //    [self setImage:nil];
    self.previousPoint1 = CGPointMake(0, 0);
    self.previousPoint2 = CGPointMake(0, 0);
    self.currentPoint = CGPointMake(0, 0);
    [self setNeedsDisplay];
}


// compute the midpoint of two points
CGPoint midPoint1(CGPoint p1, CGPoint p2)
{
  return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}

#pragma mark - Touch handling

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    CGPoint currentPoint = [touch locationInView:self];

    self.previousPoint1 = [touch locationInView:self];
    self.previousPoint2 = [touch locationInView:self];
    self.currentPoint = [touch locationInView:self];

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y);
    // the path is not used beyond this point, so release it to avoid a leak
    CGPathRelease(path);

    [self.pointsArrM removeAllObjects];
    // record the starting point
    NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
    [self.pointsArrM addObject:dict];
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    CGPoint currentPoint = [touch locationInView:self];
    self.previousPoint2 = self.previousPoint1;
    self.previousPoint1 = [touch previousLocationInView:self];
    self.currentPoint   = [touch locationInView:self];

    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);

    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    CGRect drawBox = bounds;

    // Pad our values so the bounding box respects our line width
    drawBox.origin.x    -= self.lineWidth * 2;
    drawBox.origin.y    -= self.lineWidth * 2;
    drawBox.size.width  += self.lineWidth * 4;
    drawBox.size.height += self.lineWidth * 4;

    UIGraphicsBeginImageContext(drawBox.size);

    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    self.curImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self setNeedsDisplayInRect:drawBox];

    // record the point so it can be sent to the other end
    NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
    [self.pointsArrM addObject:dict];
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    //     if([touches count] >= 2)return;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
    self.previousPoint2 = self.previousPoint1;
    self.previousPoint1 = [touch previousLocationInView:self];
    self.currentPoint   = [touch locationInView:self];

    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);

    // work out the dirty rect for this last segment
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    CGRect drawBox = bounds;

    // Pad our values so the bounding box respects our line width
    drawBox.origin.x    -= self.lineWidth * 2;
    drawBox.origin.y    -= self.lineWidth * 2;
    drawBox.size.width  += self.lineWidth * 4;
    drawBox.size.height += self.lineWidth * 4;

    UIGraphicsBeginImageContext(drawBox.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    self.curImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self setNeedsDisplayInRect:drawBox];

    [self.pointsArrM addObject:dict];
}
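Since the whole reason for collecting the points is to send them to the other end for real-time syncing, the dictionaries gathered in self.pointsArrM can be serialized once the stroke ends. A minimal sketch, assuming the transport layer simply takes NSData (sendStrokeData: is a placeholder for whatever channel you use):

    // Serialize the collected points to JSON and hand them to the transport layer.
    - (void)sendCurrentStroke
    {
        NSError *error = nil;
        NSData *data = [NSJSONSerialization dataWithJSONObject:self.pointsArrM
                                                       options:0
                                                         error:&error];
        if (data && !error) {
            // placeholder: send the data over whatever channel connects the two ends
            [self sendStrokeData:data];
        }
    }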

#pragma mark - Drawing

- (void)drawRect:(CGRect)rect
{
    // draw the cached image, then stroke only the newest segment on top of it
    [self.curImage drawAtPoint:CGPointMake(0, 0)];
    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    // get the current context
    self.context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:self.context];

    CGContextMoveToPoint(self.context, mid1.x, mid1.y);

    // add the new curve segment
    CGContextAddQuadCurveToPoint(self.context, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    // round line caps
    CGContextSetLineCap(self.context, kCGLineCapRound);

    // line width
    CGContextSetLineWidth(self.context, self.isErase ? kEraseLineWidth : kLineWidth);

    // stroke color
    CGContextSetStrokeColorWithColor(self.context, self.isErase ? [UIColor clearColor].CGColor : self.lineColor.CGColor);

    CGContextSetLineJoin(self.context, kCGLineJoinRound);

    // the blend mode depends on whether we are erasing
    CGContextSetBlendMode(self.context, self.isErase ? kCGBlendModeDestinationIn : kCGBlendModeNormal);

    CGContextStrokePath(self.context);

    [super drawRect:rect];
}

5. Summary: pros and cons

  • Pros
    1. The drawing is done at a lower level, so it is faster and more efficient.
    2. Every stroke feels smooth, with no lag, and already-drawn paths are never redrawn.
    3. The lines look rounder and smoother.

  • Cons
    1. Given an already known set of points, redrawing all of the point paths takes a very long time to finish.
    2. If the app is already consuming a lot of resources (ours runs a video session,
    a communication session and a lot of interactive controls at the same time),
    strokes drawn on an iPad 3 show gaps
    (iPad 2, iPad mini 2/3 and iPad Air do not have this problem; I have not tested iPhone 4s or 5).
    The likely reason: the iPad 3 has a Retina screen, so the resolution doubled,
    but its CPU/GPU is only about 50% faster than the iPad 2's,
    so it cannot pick up the touch points continuously and the stroke breaks.

  • Use cases
    1. When the app is not performance-hungry and no redraw is needed.
    2. Drawing on a single page, with no redraw afterwards.
    3. When you do not care about the replay time.

III. Implementing the drawing board with OpenGL ES

Each of the approaches above has its strengths, but the pitfalls I could not get around pushed me toward yet another solution: the even lower-level OpenGL ES. I had never used it before and do not know it in depth; most of the basics here were gathered from other people's blog posts, and getting it to work in a real project owes a lot to Apple's official demo.

Reference: http://www.jianshu.com/p/7d4710b815c2/ includes a demo showing how to implement a drawing board with OpenGL ES, and I largely followed the same pattern. The implementation details are below:

  • Preparation
    1. Add the OpenGLES.framework system library and import the headers:
    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>
    #import <GLKit/GLKit.h>

    2. Import the helper files (their details are not covered here):
    #import "shaderUtil.h"
    #import "fileUtil.h"
    #import "debug.h"


     
    (Screenshot: 屏幕快照 2016-05-20 14.37.58.png)

    3. This semi-transparent image is important: it acts as the brush tip, and its alpha controls how light or dark the rendered stroke color is.


     
    (Screenshot: 屏幕快照 2016-05-20 14.41.41.png)
  • Initialization
    1. OpenGL ES setup

     // create a texture from an image file
     - (textureInfo_t)textureFromName:(NSString *)name
     {
     CGImageRef      brushImage;
     CGContextRef    brushContext;
     GLubyte         *brushData;
     size_t          width, height;
     GLuint          texId;
     textureInfo_t   texture;
    
     // First create a UIImage object from the data in a image file, and then extract the Core Graphics image
     brushImage = [UIImage imageNamed:name].CGImage;
    
       // Get the width and height of the image
     width = CGImageGetWidth(brushImage);
     height = CGImageGetHeight(brushImage);
    
     // Make sure the image exists
     if(brushImage) {
     // Allocate  memory needed for the bitmap context
     brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
     // Use  the bitmatp creation function provided by the Core Graphics framework.
     brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
     // After you create the context, you can draw the  image to the context.
     CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
     // You don't need the context at this point, so you need to release it to avoid memory leaks.
     CGContextRelease(brushContext);
     // Use OpenGL ES to generate a name for the texture.
     glGenTextures(1, &texId);
     // Bind the texture name.
     glBindTexture(GL_TEXTURE_2D, texId);
     // Set the texture parameters to use a minifying filter and a linear filer (weighted average)
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     // Specify a 2D texture image, providing the a pointer to the image data in memory
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
     // Release  the image data; it's no longer needed
     free(brushData);
     
     texture.id = texId;
     texture.width = (int)width;
     texture.height = (int)height;
     }
    
     return texture;
     }
    
     // initialize GL and set up the brush texture
     - (BOOL)initGL
       {
       // Generate IDs for a framebuffer object and a color renderbuffer
     glGenFramebuffers(1, &viewFramebuffer);
     glGenRenderbuffers(1, &viewRenderbuffer);
    
     // bind the framebuffer
     glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    
     // bind the color renderbuffer (it is attached to the framebuffer just below)
     glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
     // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
     // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
     [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
     glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);
    
     glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
     glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    
     // For this sample, we do not need a depth buffer. If you do, this is how you can create one and attach it to the framebuffer:
     //    glGenRenderbuffers(1, &depthRenderbuffer);
     //    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
      //    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
       //    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    
       if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
     {
     NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
     return NO;
     }
    
         // set the viewport
     glViewport(0, 0, backingWidth, backingHeight);
    
     // Create a Vertex Buffer Object to hold our data
     glGenBuffers(1, &vboId);
    
         // Load the brush tip texture
       brushTexture = [self textureFromName:@"Particle"];
     
     // Load shaders
     [self setupShaders];
    
     // Enable blending and set a blending function appropriate for premultiplied alpha pixel data
     glEnable(GL_BLEND);
     glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    
     return YES;
     }
    
    - (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
      {
      // Allocate color buffer backing based on the current layer size
      glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
      [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
      glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
      glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    // For this sample, we do not need a depth buffer. If you do, this is how you can allocate depth buffer backing:
    // glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    // glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
    // glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
    NSLog(@"Failed to make complete framebuffer objectz %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    return NO;
    }

    // Update projection matrix
    GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
    GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

    glUseProgram(program[PROGRAM_POINT].id);
    glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);

    // Update viewport
    glViewport(0, 0, backingWidth, backingHeight);

    return YES;
    }

    - (void)setupShaders
      {
      for (int i = 0; i < NUM_PROGRAMS; i++)
      {
      char *vsrc = readFile(pathForResource(program[i].vert));
      char *fsrc = readFile(pathForResource(program[i].frag));
      GLsizei attribCt = 0;
      GLchar *attribUsed[NUM_ATTRIBS];
      GLint attrib[NUM_ATTRIBS];
      GLchar *attribName[NUM_ATTRIBS] = {
      "inVertex",
      };
      const GLchar *uniformName[NUM_UNIFORMS] = {
      "MVP", "pointSize", "vertexColor", "texture",
      };

      // auto-assign known attribs
      for (int j = 0; j < NUM_ATTRIBS; j++)
      {
      if (strstr(vsrc, attribName[j]))
      {
      attrib[attribCt] = j;
      attribUsed[attribCt++] = attribName[j];
      }
      }

      glueCreateProgram(vsrc, fsrc,
      attribCt, (const GLchar **)&attribUsed[0], attrib,
      NUM_UNIFORMS, &uniformName[0], program[i].uniform,
      &program[i].id);
      free(vsrc);
      free(fsrc);

      // Set constant/initalize uniforms
      if (i == PROGRAM_POINT)
      {
      glUseProgram(program[PROGRAM_POINT].id);

        // the brush texture will be bound to texture unit 0
        glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_TEXTURE], 0);
        
        // viewing matrices
        GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
        GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
        GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
        
        glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);
        
        // point size
        glUniform1f(program[PROGRAM_POINT].uniform[UNIFORM_POINT_SIZE], brushTexture.width / kBrushScale);
        
        // initialize brush color
        glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
      

      }
      }
      glError();
      }

4. The drawing method

// render a line between two points
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
//     NSLog(@"drawLineWithPoints--%@--%@",NSStringFromCGPoint(start),NSStringFromCGPoint(end));
static GLfloat*     vertexBuffer = NULL;
static NSUInteger   vertexMax = 64;
NSUInteger          vertexCount = 0,
count,
i;

[EAGLContext setCurrentContext:context];
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

// Convert locations from Points to Pixels
CGFloat scale = self.contentScaleFactor;

start.x *= scale;
start.y *= scale;
end.x *= scale;
end.y *= scale;

// Allocate vertex array buffer
if(vertexBuffer == NULL)
    vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

// Add points to the buffer so there are drawing points every X pixels
count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
for(i = 0; i < count; ++i) {
    if(vertexCount == vertexMax) {
        vertexMax = 2 * vertexMax;
        vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
    }
    
    vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
    vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
    vertexCount += 1;
}

// Load data to the Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW);


glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);

// Draw
glUseProgram(program[PROGRAM_POINT].id);

// draw the stroke as a run of GL_POINTS (the brush texture gives each point its shape)
glDrawArrays(GL_POINTS, 0, (int)vertexCount);

// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}

5. Clearing the board
// clear everything that has been drawn
- (void)clearDrawImageView
{
[EAGLContext setCurrentContext:context];

// Clear the buffer
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);

// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
  }

6. Setting the brush color
- (void)setBrushColorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha
{
// Update the brush color
brushColor[0] = red ;
brushColor[1] = green ;
brushColor[2] = blue ;
brushColor[3] = alpha;

if (initialized) {
    glUseProgram(program[PROGRAM_POINT].id);
    // update the brush color uniform
    glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
}
}



7. Releasing resources
- (void)dealloc
{
// Destroy framebuffers and renderbuffers
if (viewFramebuffer) {
    glDeleteFramebuffers(1, &viewFramebuffer);
    viewFramebuffer = 0;
}
if (viewRenderbuffer) {
    glDeleteRenderbuffers(1, &viewRenderbuffer);
    viewRenderbuffer = 0;
}
if (depthRenderbuffer)
{
    glDeleteRenderbuffers(1, &depthRenderbuffer);
    depthRenderbuffer = 0;
}
// texture
if (brushTexture.id) {
    glDeleteTextures(1, &brushTexture.id);
    brushTexture.id = 0;
}
// vbo
if (vboId) {
    glDeleteBuffers(1, &vboId);
    vboId = 0;
}

// tear down context
if ([EAGLContext currentContext] == context)
    [EAGLContext setCurrentContext:nil];
}


8. Getting the touch points from UIView's touch methods

  // touchesBegan: set the starting point
  - (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
//    CGRect    bounds = [self bounds];
UITouch*  touch = [[event touchesForView:self] anyObject];

//    firstTouch = YES;

// Convert the touch point from the UIView coordinate space to the OpenGL one (upside-down flip)
_previousLocation = [touch locationInView:self];
_previousLocation.y = self.height - _previousLocation.y;

CGPoint currentPoint = [touch locationInView:self];
NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};

[self.pointsArrM addObject:dict];

//    NSLog(@"touchesBegan--%@",dict);
}

// touchesMoved: render the new segment
- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
//    CGRect    bounds = [self bounds];
UITouch* touch = [[event touchesForView:self] anyObject];

_location = [touch locationInView:self];
_location.y = self.height - _location.y;
_previousLocation = [touch previousLocationInView:self];
_previousLocation.y = self.height - _previousLocation.y;

[self renderLineFromPoint:_previousLocation toPoint:_location];

CGPoint currentPoint = [touch locationInView:self];
NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
[self.pointsArrM addObject:dict];
//    NSLog(@"touchesMoved--%@",dict);

}

// touchesEnded: set the end point
- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{

UITouch*  touch = [[event touchesForView:self] anyObject];
_location = [touch locationInView:self];
_location.y = self.height - _location.y;

_previousLocation = [touch previousLocationInView:self];
_previousLocation.y = self.height - _previousLocation.y;

[self renderLineFromPoint:_previousLocation toPoint:_location];

_location = CGPointMake(0, 0);
_previousLocation = CGPointMake(0, 0);

CGPoint currentPoint = [touch locationInView:self];
NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
[self.pointsArrM addObject:dict];


//    NSLog(@"touchesEnded--%@",dict);
}

// touchesCancelled: set the end point. A note is needed here: our board lives in a scroll view
// and supports two-finger dragging to move the board, so when two fingers touch the board
// and lift off, the touch sometimes skips touchesEnded and lands in touchesCancelled instead.
// We therefore handle it exactly the same way as touchesEnded.
- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
UITouch*  touch = [[event touchesForView:self] anyObject];
_location = [touch locationInView:self];
_location.y = self.height - _location.y;

_previousLocation = [touch previousLocationInView:self];
_previousLocation.y = self.height - _previousLocation.y;

[self renderLineFromPoint:_previousLocation toPoint:_location];

_location = CGPointMake(0, 0);
_previousLocation = CGPointMake(0, 0);

CGPoint currentPoint = [touch locationInView:self];
NSDictionary *dict = @{@"x":@(currentPoint.x),@"y":@(currentPoint.y)};
[self.pointsArrM addObject:dict];
}
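Because renderLineFromPoint:toPoint: draws straight into the render buffer, the point dictionaries received from the other end can be replayed by feeding consecutive points back into it. A minimal sketch, assuming points is the decoded array of @{@"x":..., @"y":...} dictionaries (the y flip mirrors the touch handlers above, which use the demo's self.height):

    // replay a stroke received from the other end
    - (void)replayStrokeWithPoints:(NSArray<NSDictionary *> *)points
    {
        if (points.count < 2) return;
        for (NSUInteger i = 1; i < points.count; i++) {
            CGPoint from = CGPointMake([points[i - 1][@"x"] floatValue],
                                       [points[i - 1][@"y"] floatValue]);
            CGPoint to   = CGPointMake([points[i][@"x"] floatValue],
                                       [points[i][@"y"] floatValue]);
            // flip from UIKit coordinates to OpenGL coordinates, as in the touch handlers
            from.y = self.height - from.y;
            to.y   = self.height - to.y;
            [self renderLineFromPoint:from toPoint:to];
        }
    }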


2. The view's layerClass method must be overridden, otherwise nothing gets rendered into the layer. Note that once it is overridden you can no longer draw with the native (Core Graphics) drawing methods.
   + (Class)layerClass
   {
       return [CAEAGLLayer class];
   }

    // set the pen color
    - (void)setLineColor:(UIColor *)lineColor{
        _lineColor = lineColor;
        if (lineColor == [UIColor blackColor]) {
            [self setBrushColorWithRed:0 green:0 blue:0 alpha:1];
        }
        else if (lineColor == [UIColor redColor]) {
            [self setBrushColorWithRed:1 green:0 blue:0 alpha:1];
        }
        else if (lineColor == [UIColor greenColor]) {
            [self setBrushColorWithRed:0 green:1 blue:0 alpha:1];
        }
        else if (lineColor == [UIColor blueColor]) {
            [self setBrushColorWithRed:0 green:0 blue:1 alpha:1];
        } else {
            [self setBrushColorWithRed:0 green:0 blue:0 alpha:1];
        }
    }

      // toggle eraser mode

    - (void)setIsErase:(BOOL)isErase
    {
        _isErase = isErase;
        if (isErase) {
            // erase by drawing with a fully transparent brush
            [self setBrushColorWithRed:0 green:0 blue:0 alpha:0];
            // blend mode used for erasing
            glBlendFunc(GL_ONE, GL_ZERO);
        } else {
            [self setBrushColorWithRed:1 green:0 blue:0 alpha:1];
            // restore the normal blend mode for premultiplied alpha
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        }
    }

   If the board is added through a storyboard, the initialization has to be done in -initWithCoder:(NSCoder*)coder:
    - (id)initWithCoder:(NSCoder*)coder {

      if ((self = [super initWithCoder:coder])) {
    
    // get the layer property from the base class and cast it to CAEAGLLayer
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)super.layer;
     eaglLayer.opaque = YES; // opaque layer, no Quartz transparency handling needed
    // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
    
    eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
    
    /*
     The API parameter has two options:
     kEAGLRenderingAPIOpenGLES1 = 1  use version 1.1 of the rendering API
     kEAGLRenderingAPIOpenGLES2 = 2  use version 2.0 of the rendering API
     */
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    // make this the current context
    if (!context || ![EAGLContext setCurrentContext:context]) {
        return nil;
    }
    // Set the view's scale factor as you wish
    self.contentScaleFactor = [[UIScreen mainScreen] scale];
    
    // Make sure to start with a cleared buffer
    needsErase = YES;
    }
     return self;
    }


    3. If the view is resized, layoutSubviews will be called; that is the perfect opportunity
       to update the framebuffer so that it matches the size of the display area.
    -(void)layoutSubviews
  {
      [EAGLContext setCurrentContext:context];

    if (!initialized) {
        initialized = [self initGL];
    }
    else {
        [self resizeFromLayer:(CAEAGLLayer*)self.layer];
    }

    // Clear the framebuffer the first time it is allocated
    if (needsErase) {
        [self clearDrawImageView];
        needsErase = NO;
    }
 }
Summary
  • Pros
    1. Very low level: drawing is faster because it renders directly on the hardware, which fixed the broken-stroke bug the previous approach showed on the iPad 3.
    2. Better performance overall.

  • Cons
    1. I have not yet found a way to draw arcs.
    2. It is much lower level; the API is hard to read and mostly uncommented, and even the commented parts are hard to follow.
    3. Redrawing from a known set of points is also slow, but unlike the previous approach you can at least watch the stroke being replayed, which may suit some special requirements.

  • Pitfalls I hit while integrating it (see the sketch after these lists)
    1. Switching between the eraser and the pen sometimes leaves the state broken, for reasons I have not pinned down. Workaround: set the state again on every touchesBegan.
    2. In eraser mode, a small dot keeps following the stroke while erasing, again for unknown reasons. Same workaround as above.

  • Use cases
    1. Drawing a few simple segments once, with no later modification.
    2. Simple decorative lines needed for UI effects.
    3. Not recommended when frequent modification and redrawing are needed.
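A sketch of the workaround mentioned under the pitfalls: re-apply the brush state at the start of every new stroke. Re-assigning the properties simply re-runs the setIsErase:/setLineColor: setters shown above; treat this as my guess at a minimal fix rather than the demo's exact code:

    // re-apply pen/eraser state on every new stroke to work around the lost-state issue
    - (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
    {
        // force the current mode to be applied again before drawing
        self.isErase = self.isErase;          // re-runs setIsErase: (brush alpha + blend mode)
        if (!self.isErase) {
            self.lineColor = self.lineColor;  // re-runs setLineColor: (brush color)
        }
        // ... the original touchesBegan: body continues here ...
    }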

IV. Demo

Here is the demo: https://github.com/ddc391565320/DCDrawView. Corrections are welcome if anything is off.



Author: 踏遍青山
Link: https://www.jianshu.com/p/8c145884cf2c
Source: 简书 (Jianshu)
Copyright belongs to the author. For reproduction in any form, please contact the author for authorization and credit the source.