Showing posts with label opengl. Show all posts

Saturday, October 13, 2018

GLFW has long delay when creating a window

Leave a Comment

I'm using GLFW for the first time. Pulled the latest stable release (3.2.1) and I'm using the example code found on the GLFW website:

#include <GLFW/glfw3.h>

int main(void)
{
    GLFWwindow* window;

    /* Initialize the library */
    if (!glfwInit())
        return -1;

    /* Create a windowed mode window and its OpenGL context */
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }

    /* Make the window's context current */
    glfwMakeContextCurrent(window);

    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(window))
    {
        /* Render here */
        glClear(GL_COLOR_BUFFER_BIT);

        /* Swap front and back buffers */
        glfwSwapBuffers(window);

        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}

There's a fairly long delay (20 seconds or so) in the call to choosePixelFormat() (wgl_context.c): nativeCount has a value of 627, and the time seems to be spent in the for loop over those formats.

There's no delay if I use freeGLUT to create a window, or if I create a window directly with WinAPI calls (CreateWindow, etc.) and set up the pixel format descriptor myself.

I'm using Windows 10; I tried it first with Visual Studio 2015 and then with 2017. The graphics card is an NVIDIA Quadro M6000.

I did slightly modify the above code to add a call that initializes GLEW, but the delay is the same with or without it.

0 Answers

Read More

Wednesday, April 18, 2018

Performant 2D OpenGL graphics in R for fast display of raster image using qtpaint (qt) or rdyncall (SDL/OpenGL) packages?

Leave a Comment

For a real-time interactive Mandelbrot viewer I am making in R (Rcpp + OpenMP + Shiny), I am on the lookout for a performant way to display 1920x1080 matrices as raster images, in the hope of achieving ca. 5-10 fps (calculating the Mandelbrot images themselves now achieves ca. 20-30 fps at moderate zooms, and scrolling around certainly should be fast). Using image() with the option useRaster=TRUE, plot.raster, or even grid.raster() still doesn't quite cut it, so I am on the lookout for a more performant option, ideally using OpenGL acceleration.

I noticed that there are Qt wrapper packages qtutils and qtpaint: qtutils provides sceneDevice() (http://finzi.psych.upenn.edu/R/library/qtutils/html/sceneDevice.html), where you can set the argument opengl=TRUE, and qtpaint provides qplotView() (http://finzi.psych.upenn.edu/R/library/qtpaint/html/qplotView.html), again with argument opengl=TRUE, as well as painting functions (http://finzi.psych.upenn.edu/R/library/qtpaint/html/painting.html).

And I also noticed that one should be able to call SDL and GL/OpenGL functions using the rdyncall package (install it from https://cran.r-project.org/src/contrib/Archive/rdyncall/ and SDL from https://www.libsdl.org/download-1.2.php); demos are available at http://hg.dyncall.org/pub/dyncall/bindings/file/87fd9f34eaa0/R/rdyncall/demo/00Index, e.g. http://hg.dyncall.org/pub/dyncall/bindings/file/87fd9f34eaa0/R/rdyncall/demo/randomfield.R.

Am I correct that with these packages one should be able to display a 2D raster image using OpenGL acceleration? If so, does anyone have thoughts on how to do this? (I'm asking because I'm not an expert in either Qt or SDL/OpenGL.)

Some timings of non-OpenGL options which are too slow for my application:

# some example data & desired colour mapping of [0-1] ranged data matrix
library(RColorBrewer)
ncol = 1080
cols = colorRampPalette(RColorBrewer::brewer.pal(11, "RdYlBu"))(ncol)
colfun = colorRamp(RColorBrewer::brewer.pal(11, "RdYlBu"))
col = rgb(colfun(seq(0, 1, length.out = ncol)), max = 255)
mat = matrix(seq(1:1080)/1080, nrow = 1920, ncol = 1080, byrow = TRUE)

mat2rast = function(mat, col) {
  idx = findInterval(mat, seq(0, 1, length.out = length(col)))
  colors = col[idx]
  rastmat = t(matrix(colors, ncol = ncol(mat), nrow = nrow(mat), byrow = TRUE))
  class(rastmat) = "raster"
  return(rastmat)
}
system.time(mat2rast(mat, col)) # 0.24s

# plot.raster method - one of the best?
par(mar = c(0, 0, 0, 0))
system.time(plot(mat2rast(mat, col), asp = NA)) # 0.26s

# grid graphics - tie with plot.raster?
library(grid)
system.time(grid.raster(mat2rast(mat, col), interpolate = FALSE)) # 0.28s

# base R image()
par(mar = c(0, 0, 0, 0))
system.time(image(mat, axes = FALSE, useRaster = TRUE, col = cols)) # 0.74s
# note Y is flipped compared to the 2 options above - but not so important,
# as I can fill the matrix the way I want

# magick - browser viewer, so no good....
# library(magick)
# image_read(mat2rast(mat, col))

# imager - doesn't plot in a base R graphics device, so this one won't work together with Shiny
# If you wouldn't have to press ESC to return control to R,
# this might have some potential though...
library(imager)
display(as.cimg(mat2rast(mat, col)))

# ggplot2 - just for the record...
df = expand.grid(y = 1:1080, x = 1:1920)
df$z = seq(1, 1080)/1080
library(ggplot2)
system.time({q <- qplot(data = df, x = x, y = y, fill = z, geom = "raster") +
                 scale_x_continuous(expand = c(0, 0)) +
                 scale_y_continuous(expand = c(0, 0)) +
                 scale_fill_gradientn(colours = cols) +
                 theme_void() + theme(legend.position = "none"); print(q)}) # 11s
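For reference, the palette lookup that mat2rast performs with findInterval is cheap compared with the plotting itself. A hypothetical Java sketch of the same [0,1] → palette-index mapping (0-based here, instead of R's 1-based indexing; the linear breakpoints are assumed from the seq() call above):

```java
public class PaletteLookup {
    // Maps a value in [0, 1] onto one of nColors palette slots, mimicking
    // findInterval(v, seq(0, 1, length.out = nColors)) from the R snippet.
    // Out-of-range values are clamped, like raster colour mapping would do.
    static int paletteIndex(double v, int nColors) {
        double clamped = Math.max(0.0, Math.min(1.0, v));
        return (int) Math.floor(clamped * (nColors - 1));
    }

    public static void main(String[] args) {
        System.out.println(paletteIndex(0.0, 1080)); // first colour
        System.out.println(paletteIndex(1.0, 1080)); // last colour
    }
}
```

This is just the index computation; the expensive part in R is building and drawing the character raster, which is what an OpenGL texture upload would replace.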

1 Answer

Answer 1

According to the RGL package introduction, it is:

a visualization device system for R, using OpenGL as the rendering backend. An rgl device at its core is a real-time 3D engine written in C++. It provides an interactive viewpoint navigation facility (mouse + wheel support) and an R programming interface.

As RGL is a real-time 3D engine, I expect that using it for 2D will give you a fast display.

Please note that this is an old project, so I am not sure that it fits your requirements.

You can take a look at this paper and see some result images in this gallery.

Read More

Sunday, December 3, 2017

Xcode - Mac App - Bootstrap check in error on launch

Leave a Comment

I'm creating a C++ Mac app with Xcode. I've done this before without any problems but I started a new project a few weeks ago and this one has problems.

Message on launch

When I launch the app, this message appears in the console after calling SDL_GL_CreateContext:

bootstrap_check_in():  (os/kern) unknown error code (44c) 

I've never seen this before and I don't know what it means. The app still launches though.

Opening popups

osascript no longer works. When this command is invoked,

osascript -e 'try' \
          -e 'POSIX path of ( choose file name with prompt "Save screenshot" default name "Screenshot.png" )' \
          -e 'on error number -128' \
          -e 'end try'

this message appears in the console:

2017-11-25 10:50:19.837159+1030 osascript[7910:487965] +[NSXPCSharedListener endpointForReply:withListenerName:]: an error occurred while attempting to obtain endpoint for listener 'com.apple.view-bridge': Connection interrupted
2017-11-25 10:50:19.838056+1030 osascript[7910:487963] *** Assertion failure in +[NSXPCSharedListener connectionForListenerNamed:fromServiceNamed:], /BuildRoot/Library/Caches/com.apple.xbs/Sources/ViewBridge/ViewBridge-341.1/NSXPCSharedListener.m:421
2017-11-25 10:50:19.838724+1030 osascript[7910:487963] *** Assertion failure in -[NSVBSavePanel viewWillInvalidate:], /BuildRoot/Library/Caches/com.apple.xbs/Sources/AppKit/AppKit-1561.10.101/Nav.subproj/OpenAndSavePanelRemote/NSVBOpenAndSavePanels.m:387
2017-11-25 10:50:19.879032+1030 osascript[7910:487963] -[NSVBSavePanel init] caught non-fatal NSInternalInconsistencyException 'bridge absent' with backtrace

I've excluded the stack trace; comment if you want me to include it.

Random message

Sometimes another message appears in the console.

2017-11-26 11:09:14.994459+1030 Buttons[28532:1663094] [User Defaults] Couldn't read values in CFPrefsPlistSource<0x6000000e6b80> (Domain: com.apple.PowerManagement, User: kCFPreferencesAnyUser, ByHost: Yes, Container: (null), Contents Need Refresh: Yes): accessing preferences outside an application's container requires user-preference-read or file-read-data sandbox access, detaching from cfprefsd 

Paths

Calls to SDL_GetPrefPath yield different paths in this application.

This application

SDL_GetPrefPath("company", "my app") -> "/Users/indikernick/Library/Containers/company.my-app/Data/Library/Application Support/company/my app/" 

Another application that isn't broken

SDL_GetPrefPath("company", "my app") -> "/Users/indikernick/Library/Application Support/company/my app/" 

That's all

I'm pretty sure all of these problems are related. The project is on GitHub, so if you've seen this problem before, you can check the project settings. If it matters, I'm using Xcode 9.0 and macOS 10.13.1.

Thank you in advance for any assistance.

1 Answer

Answer 1

It looks like your application is (somehow) sandboxed. This can be seen from the path returned by SDL_GetPrefPath, which starts with ~/Library/Containers, and from the 'random' message, which clearly states:

accessing preferences outside an application's container requires user-preference-read or file-read-data sandbox access

The popup error message is also very suspect: it looks like your app is not entitled to access some system resource.

You should check in Xcode whether sandboxing is enabled for your app (i.e. check whether a property list file called .entitlements is shown in the project navigator).

More on sandboxing: https://developer.apple.com/library/content/documentation/Security/Conceptual/AppSandboxDesignGuide/AppSandboxInDepth/AppSandboxInDepth.html

Read More

Monday, October 30, 2017

GLSL: Can I combine MRT, ssbo and imageAtomic operations in the same shader (pass)?

Leave a Comment

A 2-pass rendering system in OpenGL uses an MRT shader bound to 2 framebuffer textures, tex1 and tex2. The goal of the MRT pass is to compute the overdraw in the scene and render it out in a gather pass; I use the framebuffer textures to pass on the result.

It also has a working SSBO that is quite large (sized for a fixed screen resolution) and takes ages to link, but I can use it to do atomicAdd operations. What I am trying to accomplish is to replace this with imageAtomicAdd operations on a uimage2D, just like with the MRT passes.

The problem is that the result of imageAtomicAdd is always zero, where I expect it to count up just like atomicAdd does at that point.

#version 440 core

layout(early_fragment_tests) in;

// this works fine
layout (location = 0) out vec4 tex1;
layout (location = 1) out vec4 tex2;

// this works fine
layout(std430, binding = 3) buffer ssbo_data
{
    uint v[1024*768];
};

// this does not work at all.
uniform volatile layout(r32ui) uimage2D imgCounter;

out vec4 frag_colour;

void main()
{
    ivec2 coords = ivec2(gl_FragCoord.xy);
    uint addValue = 1u;

    uint countOverdraw1 = atomicAdd(v[coords.x + coords.y * 1024], 1u);
    uint countOverdraw2 = imageAtomicAdd(imgCounter, ivec2(0,0), 1u);

    memoryBarrier();

    // supports 256 levels of overdraw..
    float overdrawDepth = 256.0;
    vec3 c1 = vec3(float(countOverdraw1+1)/overdrawDepth, 0, 1);
    vec3 c2 = vec3(float(countOverdraw2+1)/overdrawDepth, 0, 1);

    tex1 = vec4(c1, 1);
    tex2 = vec4(c2, 1);

    frag_colour = vec4(1,1,1,1);
}

From the Khronos documentation on image atomic operations I gather that:

Atomic operations to any texel that is outside of the boundaries of the bound image will return 0 and do nothing.

But the coordinate ivec2(0,0) is well within the bounds of the texture (1024 x 768).

Maybe the texture is not set up correctly? This is how I construct the uimage2D (pieced together from the pipeline flow):

EDIT: I updated the code as suggested by the answer from Nicol Bolas: texture parameters are now set instead of sampler parameters.

char data[1024*768*4];
glGenTextures(1, &m_Handle);
m_Target = GL_TEXTURE_2D;

glActiveTexture(GL_TEXTURE0 + 6);
glBindTexture(m_Target, m_Handle);

// updated: a sampler object was bound to the texture, but is now removed
glTexParameteri(m_Target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // updated
glTexParameteri(m_Target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // updated
glTexParameteri(m_Target, GL_TEXTURE_WRAP_R, GL_REPEAT);       // updated
glTexParameteri(m_Target, GL_TEXTURE_WRAP_S, GL_REPEAT);       // updated
glTexParameteri(m_Target, GL_TEXTURE_WRAP_T, GL_REPEAT);       // updated
glTexImage2D(m_Target, 0, GL_R32UI, 1024, 768, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, data);

If I run it through gDEBugger GL, I see that "Texture data is not available at this time", and while the 'Texture 4' parameters of the texture are filled in and correct, none of the 'Texture Parameters' and 'Level 0 parameters' are shown (N/A). Breaking in the debugger at that point surfaces a number of errors that do not appear outside of gDEBugger. Here are the first few:

GL_INVALID_OPERATION error generated. The required buffer is missing.
GL_INVALID_ENUM error generated. <pname> requires feature(s) disabled in the current profile.
GL_INVALID_OPERATION error generated. <index> exceeds the maximum number of supported texture units.
GL_INVALID_ENUM error generated. or require feature(s) disabled in the current profile.
GL_INVALID_OPERATION error generated. Can't mix integer and non-integer data
...

I'm explicitly forcing GL 4.4 (or a GL 4.4 core profile), so I'm a bit puzzled about what the missing 'required buffer' may be. Could it be that it mistakenly sees imgCounter as part of the MRT setup for the framebuffer?

1 Answer

Answer 1

That texture is incomplete.

See, when you bind a texture for use with image load/store operations, you don't bind a sampler along with it. So all of those glSamplerParameter calls are meaningless to the texture's completeness status.

The texture is incomplete because its filtering parameters default to linear filtering, which is invalid for an unsigned-integer format. When creating integer-format textures, you should always set the texture's own parameters to valid values (e.g. GL_NEAREST).

Read More

Saturday, March 4, 2017

How do I apply different textures to multiple polygons in LWJGL?

Leave a Comment

I am drawing a grid of hexagon tiles. I have six different images that I want to use as fills for these tiles. I use my hexagon class to loop through each point of a tile. The texture names are stored in an ArrayList, which is then shuffled into a random order. The problem with my code right now is that the same texture is applied to every tile. What am I doing wrong?

public class LWJGLHelloWorld {

    public static int SCREEN_WIDTH;
    public static int SCREEN_HEIGHT;
    public static int WINDOW_WIDTH;
    public static int WINDOW_HEIGHT;
    public double WIDTH;
    public double HEIGHT;
    public ArrayList<Hexagon> hexagons = new ArrayList<Hexagon>();
    public ArrayList<String> resources = new ArrayList<String>();
    public Texture brick;
    public Texture stone;
    public Texture lumber;
    public Texture wool;
    public Texture wheat;
    public Texture wasteland;

    private static enum State {
        INTRO, MAIN_MENU, GAME;
    }

    private State state = State.INTRO;

    public LWJGLHelloWorld() {

        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
        double SCREEN_WIDTH = screenSize.getWidth();
        double SCREEN_HEIGHT = screenSize.getHeight();
        double WIDTH = SCREEN_WIDTH * .85;
        double HEIGHT = SCREEN_HEIGHT * .85;

        try {
            Display.setDisplayMode(new DisplayMode((int) WIDTH, (int) HEIGHT));
            Display.setTitle("Hello, LWJGL!");
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
        }
        resetResources();

        brick = loadTexture("brick");
        stone = loadTexture("stone");
        lumber = loadTexture("lumber");
        //Texture wheat = loadTexture("wheat");
        wool = loadTexture("wool");
        wasteland = loadTexture("wasteland");

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);
        glEnable(GL_TEXTURE_2D);

        int originX = (int) (Display.getDisplayMode().getWidth() / 2);
        int originY = (int) (Display.getDisplayMode().getHeight() / 2);
        int radius = (int) (HEIGHT * .1);
        int padding = (int) (HEIGHT * .005);

        findHexCoords(originX, originY, 5, radius, padding);

        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT);
            for (int h = 0; h < hexagons.size(); h++) {
                String rsrc = resources.get(h);
                bindTexture(rsrc);
                glBegin(GL_POLYGON);
                Hexagon hex = hexagons.get(h);
                for (int p = 0; p < hex.points.length; p++) {
                    Point point = hex.points[p];
                    glTexCoord2f(point.x, point.y);
                    glVertex2f(point.x, point.y);
                }
                glEnd();
            }

            Display.update();
            Display.sync(60);
        }

        Display.destroy();
    }

    private void bindTexture(String rsrc) {
        switch (rsrc) {
        case "brick":
            brick.bind();
            break;
        case "stone":
            stone.bind();
            break;
        case "lumber":
            lumber.bind();
            break;
        case "wheat":
            //wheat.bind();
            break;
        case "wool":
            wool.bind();
            break;
        case "wasteland":
            wasteland.bind();
            break;
        }
    }

    private void findHexCoords(int x, int y, int size, int radius, int padding) {

        Point origin = new Point(x, y);
        double ang30 = Math.toRadians(30);
        double xOff = Math.cos(ang30) * (radius + padding);
        double yOff = Math.sin(ang30) * (radius + padding);
        int half = size / 2;

        int i = 0;
        for (int row = 0; row < size; row++) {

            int cols = size - Math.abs(row - half);

            for (int col = 0; col < cols; col++) {

                int xLbl = row < half ? col - row : col - half;
                int yLbl = row - half;
                int centerX = (int) (origin.x + xOff * (col * 2 + 1 - cols));
                int centerY = (int) (origin.y + yOff * (row - half) * 3);

                Hexagon hex = new Hexagon(centerX, centerY, radius);
                System.out.println(centerX + "," + centerY);
                hexagons.add(hex);
                i++;
            }
        }
    }

    private Texture loadTexture(String key) {
        try {
            return TextureLoader.getTexture("PNG", new FileInputStream(new File("img/" + key + ".png")));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }

    public static void main(String[] args) {
        new LWJGLHelloWorld();
    }

    public void resetResources() {
        resources.clear();
        resources.add("Brick");
        resources.add("Brick");
        resources.add("Brick");
        resources.add("Wool");
        resources.add("Wool");
        resources.add("Wool");
        resources.add("Wool");
        resources.add("Lumber");
        resources.add("Lumber");
        resources.add("Lumber");
        resources.add("Lumber");
        resources.add("Stone");
        resources.add("Stone");
        resources.add("Stone");
        resources.add("Wheat");
        resources.add("Wheat");
        resources.add("Wheat");
        resources.add("Wheat");
        long seed = System.nanoTime();
        Collections.shuffle(resources, new Random(seed));
        int randomIndex = ThreadLocalRandom.current().nextInt(0, 19);
        resources.add(randomIndex, "Wasteland");
        for (int r = 0; r < resources.size(); r++) {
            System.out.println(resources.get(r));
        }
    }
}

1 Answer

Answer 1

The first letter of the strings that you are adding to resources is uppercase (e.g. "Brick").

In the bindTexture switch you compare against strings with a lowercase first letter (e.g. "brick"). String matching in a switch is case-sensitive, so the switch always falls through and never binds the intended texture.

Fix either the switch cases or the resources list so the two agree.
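One way to implement this fix is to normalize the key once before switching, so "Brick" from the list and "brick" in the cases resolve identically. A minimal, hypothetical sketch (ResourceKeys and textureKey are illustrative names, not from the poster's code):

```java
public class ResourceKeys {
    // Hypothetical helper: maps a resource label to the texture key that
    // bindTexture would use, ignoring the case of the label.
    static String textureKey(String rsrc) {
        switch (rsrc.toLowerCase()) {
            case "brick":
            case "stone":
            case "lumber":
            case "wheat":
            case "wool":
            case "wasteland":
                return rsrc.toLowerCase();
            default:
                return null; // unknown resource label
        }
    }

    public static void main(String[] args) {
        System.out.println(textureKey("Brick"));     // brick
        System.out.println(textureKey("Wasteland")); // wasteland
    }
}
```

Normalizing in one place is less fragile than editing every string literal, since a future "Gold" vs "gold" mismatch fails the same way silently.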

Read More

Monday, June 13, 2016

Coloring heightmap faces instead of vertices

Leave a Comment

I'm trying to create a heightmap colored by face, instead of vertex. For example, this is what I currently have:

My terrain, by vertex But this is what I want: Per face coloring

I read that I have to split each vertex into multiple vertices, then index each separately for the triangles. I also know that Blender has a function like this for its models ("split vertices", or something similar), but I'm not sure what kind of algorithm I would follow for this. This would be the last resort, because multiplying the number of vertices in the mesh for no reason other than color doesn't seem efficient.

I also discovered flat shading (using the flat qualifier on the color input in the fragment shader), but it seems to shade squares instead of individual triangles. Is there a way to make it shade triangles?

Flatshaded

For reference, this is my current heightmap generation code:

public class HeightMap extends GameModel {

    private static final float START_X = -0.5f;
    private static final float START_Z = -0.5f;
    private static final float REFLECTANCE = .1f;

    public HeightMap(float minY, float maxY, float persistence, int width, int height, float spikeness) {
        super(createMesh(minY, maxY, persistence, width, height, spikeness), REFLECTANCE);
    }

    protected static Mesh createMesh(final float minY, final float maxY, final float persistence, final int width,
            final int height, float spikeness) {
        SimplexNoise noise = new SimplexNoise(128, persistence, 2); // Utils.getRandom().nextInt());

        float xStep = Math.abs(START_X * 2) / (width - 1);
        float zStep = Math.abs(START_Z * 2) / (height - 1);

        List<Float> positions = new ArrayList<>();
        List<Integer> indices = new ArrayList<>();

        for (int z = 0; z < height; z++) {
            for (int x = 0; x < width; x++) {
                // scale from [-1, 1] to [minY, maxY]
                float heightY = (float) ((noise.getNoise(x * xStep * spikeness, z * zStep * spikeness) + 1f) / 2
                        * (maxY - minY) + minY);

                positions.add(START_X + x * xStep);
                positions.add(heightY);
                positions.add(START_Z + z * zStep);

                // Create indices
                if (x < width - 1 && z < height - 1) {
                    int leftTop = z * width + x;
                    int leftBottom = (z + 1) * width + x;
                    int rightBottom = (z + 1) * width + x + 1;
                    int rightTop = z * width + x + 1;

                    indices.add(leftTop);
                    indices.add(leftBottom);
                    indices.add(rightTop);

                    indices.add(rightTop);
                    indices.add(leftBottom);
                    indices.add(rightBottom);
                }
            }
        }

        float[] verticesArr = Utils.listToArray(positions);
        Color c = new Color(147, 105, 59);
        float[] colorArr = new float[positions.size()];
        for (int i = 0; i < colorArr.length; i += 3) {
            float brightness = (Utils.getRandom().nextFloat() - 0.5f) * 0.5f;
            colorArr[i] = (float) c.getRed() / 255f + brightness;
            colorArr[i + 1] = (float) c.getGreen() / 255f + brightness;
            colorArr[i + 2] = (float) c.getBlue() / 255f + brightness;
        }
        int[] indicesArr = indices.stream().mapToInt((i) -> i).toArray();

        float[] normalArr = calcNormals(verticesArr, width, height);

        return new Mesh(verticesArr, colorArr, normalArr, indicesArr);
    }

    private static float[] calcNormals(float[] posArr, int width, int height) {
        Vector3f v0 = new Vector3f();
        Vector3f v1 = new Vector3f();
        Vector3f v2 = new Vector3f();
        Vector3f v3 = new Vector3f();
        Vector3f v4 = new Vector3f();
        Vector3f v12 = new Vector3f();
        Vector3f v23 = new Vector3f();
        Vector3f v34 = new Vector3f();
        Vector3f v41 = new Vector3f();
        List<Float> normals = new ArrayList<>();
        Vector3f normal = new Vector3f();
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                if (row > 0 && row < height - 1 && col > 0 && col < width - 1) {
                    int i0 = row * width * 3 + col * 3;
                    v0.x = posArr[i0];
                    v0.y = posArr[i0 + 1];
                    v0.z = posArr[i0 + 2];

                    int i1 = row * width * 3 + (col - 1) * 3;
                    v1.x = posArr[i1];
                    v1.y = posArr[i1 + 1];
                    v1.z = posArr[i1 + 2];
                    v1 = v1.sub(v0);

                    int i2 = (row + 1) * width * 3 + col * 3;
                    v2.x = posArr[i2];
                    v2.y = posArr[i2 + 1];
                    v2.z = posArr[i2 + 2];
                    v2 = v2.sub(v0);

                    int i3 = (row) * width * 3 + (col + 1) * 3;
                    v3.x = posArr[i3];
                    v3.y = posArr[i3 + 1];
                    v3.z = posArr[i3 + 2];
                    v3 = v3.sub(v0);

                    int i4 = (row - 1) * width * 3 + col * 3;
                    v4.x = posArr[i4];
                    v4.y = posArr[i4 + 1];
                    v4.z = posArr[i4 + 2];
                    v4 = v4.sub(v0);

                    v1.cross(v2, v12);
                    v12.normalize();

                    v2.cross(v3, v23);
                    v23.normalize();

                    v3.cross(v4, v34);
                    v34.normalize();

                    v4.cross(v1, v41);
                    v41.normalize();

                    normal = v12.add(v23).add(v34).add(v41);
                    normal.normalize();
                } else {
                    normal.x = 0;
                    normal.y = 1;
                    normal.z = 0;
                }
                normal.normalize();
                normals.add(normal.x);
                normals.add(normal.y);
                normals.add(normal.z);
            }
        }
        return Utils.listToArray(normals);
    }

}

Edit

I've tried a couple of things. I tried rearranging the indices with flat shading, but that didn't give me the look I wanted. I tried using a uniform vec3 array of colors and indexing it with gl_VertexID or gl_InstanceID (I'm not entirely sure of the difference), but I couldn't get the arrays to compile. Here is the GitHub repo, by the way.

1 Answer

Answer 1

flat-qualified fragment shader inputs receive the same value across an entire primitive: in your case, a triangle.

Of course, a triangle is composed of 3 vertices. And if the vertex shader outputs 3 different values, how does the fragment shader know which value to use?

This comes down to what is called the "provoking vertex." When you render, you specify a particular primitive type in your glDraw* call (GL_TRIANGLE_STRIP, GL_TRIANGLES, etc.). These primitive types generate a number of base primitives (i.e. single triangles), based on how many vertices you provided.

When a base primitive is generated, one of the vertices in that base primitive is said to be the "provoking vertex". It is that vertex's data that is used for all flat parameters.

The reason you're seeing this is that two adjacent triangles happen to use the same provoking vertex. Your mesh is smooth, so two adjacent triangles share 2 vertices, and your index generation happens to make one of those shared vertices the provoking vertex of both triangles. That means both triangles get the same flat value.

You will need to adjust your index list or otherwise alter your mesh generation so that this doesn't happen. Or you can just split your mesh into individual, non-indexed triangles; that's probably much easier.
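The "individual triangles" route amounts to de-indexing the mesh: emit three unique vertices per triangle, so each triangle owns its provoking vertex and per-face attributes never bleed into a neighbour. A hedged sketch in plain Java (not tied to the poster's Mesh class; positions are 3 floats per vertex):

```java
public class DeIndex {
    // Expands an indexed triangle mesh into a non-indexed one: every index
    // is resolved to a fresh copy of its vertex, so the output has
    // indices.length vertices (indices.length * 3 floats) and each triangle
    // can carry its own flat-shaded colour or normal.
    static float[] deIndex(float[] positions, int[] indices) {
        float[] out = new float[indices.length * 3];
        for (int i = 0; i < indices.length; i++) {
            int v = indices[i];
            out[i * 3]     = positions[v * 3];
            out[i * 3 + 1] = positions[v * 3 + 1];
            out[i * 3 + 2] = positions[v * 3 + 2];
        }
        return out;
    }
}
```

The cost is roughly a 6x vertex count on a grid mesh (each interior vertex is shared by ~6 triangles), which is usually acceptable for per-face colouring; the alternative of reordering indices so provoking vertices differ is fiddlier but keeps the memory savings.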

Read More

Monday, May 2, 2016

OpenGL exponential shadow mapping artifact

Leave a Comment

I'm trying to implement exponential shadow mapping (ESM) in my rendering engine, but I'm facing some problems: I only get to see shadows if the exponential multiplier is less than 0, and then the image becomes very dark.

I was already using Variance Shadow Maps, so I adapted my code to use ESM.

I've changed the fragment shader used to compute the shadow map like this:

#version 150 core

uniform float Exponential;    // Exponential multiplier term of the ESM equation

out float FragColor;

float map_01(float x, float v0, float v1)
{
    return (x - v0) / (v1 - v0);
}

void main()
{
    float depthDivisor = (1.0 / gl_FragCoord.z);
    float mappedDivisor = map_01(depthDivisor, 0.1f, 20.0f);    // Since my light is a directional light I force the near and far planes to 0.1 and 20.0
    FragColor = exp(Exponential * mappedDivisor);

    // VSM  float depth = gl_FragCoord.z;
    // VSM  float dx = dFdx(depth);
    // VSM  float dy = dFdy(depth);
    // VSM  float moment2 = depth * depth + 0.25 * (dx * dx + dy * dy);
    // VSM  FragColor = vec4(depth, moment2, 0.0, 1.0);
}

I've kept the same vertex shader for the shadow map:

in vec3 _position;

uniform mat4 ProjectionViewMatrix;   // Projection-View matrix of the light

layout(std140) uniform MOD
{
    mat4 AModelMatrix[20];
};

void main()
{
    gl_Position = ProjectionViewMatrix * AModelMatrix[gl_InstanceID] * vec4(_position, 1.0);
}

And I've changed my directional light fragment shader like this:

uniform mat4 InverseProjectionViewMatrix; // Inverse of the camera projection-view matrix
uniform float Exponential;                // Exponential multiplier term of the ESM equation (same as shadow map fragment shader)
uniform mat4 ProjectionViewMatrix;        // Light projection-view matrix

void main()
{
    vec3 genPos    = vec3((gl_FragCoord.x * InverseScreenSize.x), (gl_FragCoord.y * InverseScreenSize.y), 0.0f);
    genPos.z       = texture(PrepassBuffer_DepthMap, genPos.xy).r;

    vec4 clip          = InverseProjectionViewMatrix * vec4(genPos * 2.0f - 1.0f, 1.0f);
    vec3 pos           = clip.xyz / clip.w;

    vec4 shadowCoord   = ProjectionViewMatrix * vec4(pos, 1.0f);
    shadowCoord        /= shadowCoord.w;
    shadowCoord.xyz    = shadowCoord.xyz * vec3(0.5f, 0.5f, 0.5f) + vec3(0.5f, 0.5f, 0.5f);

    float occluder = texture(DILShadowMap, shadowCoord.xy).r;
    float reciever = map_01(shadowCoord.z, 0.1f, 20.0f);
    float shadowAmount = saturate(occluder * exp(-Exponential * reciever));

    ...
}

A couple of images to show the problem.

This is the result I get with Exponential set to 1.0 (no shadows at all):

This is the result I get with Exponential set to -30.0 (starting to see some shadows):

This is the result I get with the exact same computations of the position (same vertex shaders, basically) but with the variance shadow map equation:

The only things I've changed from the algorithm I've found are these two lines in the shadow map fragment shader and in the directional light fragment shader:

// Shadow map fragment shader
float depthDivisor = (1.0 / gl_FragCoord.z); // <- changed to gl_FragCoord.z (originally gl_FragCoord.w)

// Directional light fragment shader
float reciever = map_01(shadowCoord.z, 0.1f, 20.0f); // <- changed to shadowCoord.z (originally shadowCoord.w)

I've made these two changes because the W component was always 1.0, and it didn't seem correct to me to use W when the depth is stored in the Z component.
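For reference, the visibility term ESM is built on can be checked numerically. This is a sketch under the usual convention of a positive exponential constant, with both depths already mapped to [0, 1] (as map_01 does above); it is not an answer to the artifact, just the equation the shaders are meant to implement:

```python
import math

def esm_visibility(occluder_depth, receiver_depth, c=30.0):
    """Exponential shadow map test.

    The shadow map stores exp(c * d_occluder); the lighting pass multiplies
    by exp(-c * d_receiver).  The product is exp(c * (d_occluder - d_receiver)):
    roughly 1.0 when the receiver sits at the stored depth, decaying toward 0
    the further the receiver lies behind the occluder.
    """
    stored = math.exp(c * occluder_depth)          # what the shadow map holds
    vis = stored * math.exp(-c * receiver_depth)   # what the light pass computes
    return max(0.0, min(1.0, vis))                 # saturate()

print(esm_visibility(0.5, 0.5))  # 1.0: receiver at occluder depth, fully lit
print(esm_visibility(0.5, 0.6))  # ~0.05: receiver behind occluder, in shadow
```

With a negative constant the falloff inverts, which is consistent with only seeing shadows when Exponential is below 0: the sign convention between the shadow-map pass and the lighting pass has to match.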

0 Answers

Read More

Monday, April 25, 2016

Specifying texture coordinates using tessellation in GLU

Leave a Comment

When mapping textures to surfaces in OpenGL using the standard polygon method, you can do the following:

size = 10
glBegin(GL_POLYGON)

glNormal3f(0.0, 0.0, -1.0)
glTexCoord2f(0.0, 0.0); glVertex3f(0.0, 0.0, 0.0)
glTexCoord2f(size, 0.0); glVertex3f(0.0, size, 0.0)
glTexCoord2f(size, size); glVertex3f(0.0, size, size)
glTexCoord2f(0.0, size); glVertex3f(0.0, 0.0, size)

glEnd()

However, I am using tessellation to render my surfaces, so my code looks like this:

gluTessBeginPolygon(self.tessellator, None)
gluTessBeginContour(self.tessellator)

for vertex in vertices:
    gluTessVertex(self.tessellator, vertex, vertex)

gluTessEndContour(self.tessellator)
gluTessEndPolygon(self.tessellator)

Is there a gluTess* function for specifying texture coordinates, like the glTexCoord2f function available for polygons that don't use tessellation?

Without specifying texture coordinates, it seems that the color of the first texel is picked and spread over the entire surface, rather than the texture actually being displayed.
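GLU itself has no glTexCoord equivalent; the usual approach is to attach the texture coordinate to the per-vertex data passed to gluTessVertex and emit it from a GLU_TESS_VERTEX callback (glTexCoord2f before the vertex). For the wall in the immediate-mode example above, a planar mapping recovers s and t directly from the vertex position. A hypothetical helper, assuming faces lie in the x = 0 plane:

```python
def planar_texcoords(vertex):
    """Planar projection for a face in the x = 0 plane.

    Matches the immediate-mode example above, where (0, y, z) maps to
    texture coordinates (s, t) = (y, z).  The tuple (x, y, z, s, t) would
    then be handed to gluTessVertex as the vertex's user data, and a
    GLU_TESS_VERTEX callback would call glTexCoord2f(s, t) before
    emitting the position.
    """
    x, y, z = vertex
    return (y, z)

size = 10.0
quad = [(0.0, 0.0, 0.0), (0.0, size, 0.0), (0.0, size, size), (0.0, 0.0, size)]
print([planar_texcoords(v) for v in quad])
# [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)] -- same as the glTexCoord2f calls
```

Because the tessellator may also create new vertices at intersections, a GLU_TESS_COMBINE callback would need to interpolate (or recompute) the texture coordinate for those vertices as well.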

0 Answers

Read More