Open Inventor Technical FAQ


(Copyright 2000-2002, TGS Inc., All Rights Reserved)

  General Reference Info  

  1. How do I get Open Inventor to do flat shading?

    The creaseAngle field of the SoShapeHints node is one factor in whether or not you get flat shading. The crease angle is the angle between the normals of two adjacent faces. If the angle between the normals is greater than the defined creaseAngle, then Inventor computes separate normals for each face and the object appears faceted.

    • To get flat shading,
      • Specify a MaterialBinding of PER_FACE (see SoMaterialBinding)
      • Specify one color per face (see SoMaterial, SoVertexProperty)
      • Specify a creaseAngle of zero (see SoShapeHints)

    • To get Gouraud (smooth) shading, set the creaseAngle to pi.

    The default crease angle is 0.5.

    A material binding of PER_VERTEX_INDEXED allows a different color at each vertex, but shared vertices between faces will have the same color. If the angle between faces is less than the crease angle and Open Inventor is computing the normals, the faces will share the same normal, and the faces will be smooth shaded.

    Note the "note" of page 120 of the Inventor Mentor: A shading model is not explicitly specified: shading is dictated by a combination of

    1. the material specification
    2. the lighting model (Phong by default), and
    3. the normal bindings
    See also pp. 130-132.
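
    Putting the flat-shading recipe above together, a minimal sketch might look like this (root is assumed to be an SoSeparator and faceSet an SoFaceSet with two faces; neither variable appears in the original answer):

    SoShapeHints *hints = new SoShapeHints;
    hints->creaseAngle = 0.0f;                    // never smooth across shared edges
    root->addChild( hints );

    SoMaterialBinding *binding = new SoMaterialBinding;
    binding->value = SoMaterialBinding::PER_FACE; // one material per face
    root->addChild( binding );

    SoMaterial *mat = new SoMaterial;
    mat->diffuseColor.set1Value( 0, 1.0f, 0.0f, 0.0f ); // color for face 0
    mat->diffuseColor.set1Value( 1, 0.0f, 1.0f, 0.0f ); // color for face 1
    root->addChild( mat );

    root->addChild( faceSet );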

  2. How can I change the appearance of a manipulator? For example, I am using an SoHandleBoxDragger in my app and I only want it to scale and translate in the X and Y directions. I also want to get rid of the extruders in the middle of the box. How can I restrict the dragger to the xy direction and also get rid of the unwanted extruders?

    One approach is to modify the geometry file for the dragger (found in the data\draggerDefaults folder of the install) so the unwanted part is replaced by an empty separator. This is discussed on pages 417 and 418 of the Inventor Mentor book.

    Programmatically one can accomplish the same thing by using the setPart method of the dragger to set any part listed in the Helpfile description of the dragger to an empty Separator. Note that you must pass an empty Separator, as passing NULL will cause your program to crash.
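
    As a rough sketch, assuming "extruder1" is one of the dragger's part names (check the dragger's part list in the Helpfile for the exact names):

    SoHandleBoxDragger *dragger = new SoHandleBoxDragger;

    // Replace the unwanted part with an empty separator so it is neither
    // drawn nor pickable. Do not pass NULL.
    dragger->setPart( "extruder1", new SoSeparator );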


  3. How do I do a background image? I want a background image to be fixed in a scene with all graphics rendered on top of it. For example, Mona Lisa as a background, with spinning cubes on top.

    We have an example program that shows the use of a custom node that does this. It is posted on the TGS ftp site:

    ftp://ftp.tgs.com/private/FAQ/bkgrndimage.zip
    Look in the source code of the custom node to see what options are available.

  4. For Large Model Viewing (especially SoSimplifyAction), which geometry nodes get decimated by SoDecimator? Which don't?

    Some geometry nodes can be decimated, and others can't depending on the algorithm used by the decimator. Here is the list of geometry nodes that can be decimated:

    • SoFaceSet
    • SoIndexedFaceSet
    • SoIndexedNurbsSurface
    • SoIndexedTriangleStripSet
    • SoNurbsSurface
    • SoTriangleStripSet
    • SoVRMLIndexedFaceSet

    Here is the list of geometry nodes that cannot be decimated:

    • SoAnnotText3
    • SoAsciiText
    • SoCone
    • SoCube
    • SoCylinder
    • SoIndexedLineSet
    • SoIndexedNurbsCurve
    • SoLineSet
    • SoNurbsCurve
    • SoPointSet
    • SoQuadMesh
    • SoSphere
    • SoText2
    • SoText3
    • SoVRMLBox
    • SoVRMLCone
    • SoVRMLElevationGrid
    • SoVRMLExtrusion
    • SoVRMLIndexedLineSet
    • SoVRMLPointSet
    • SoVRMLSphere
    • SoVRMLText
    • Any geometry under an SoLOD
    • Any geometry under an SoLevelOfDetail
    • Any geometry under an SoLevelOfSimplification
    • Any geometry under an SoVRMLLOD

    The decimator does not decimate nodes, such as SoCone, that adjust their complexity automatically (based on SoComplexity and/or the viewer-requested decimation level). The decimator also does not decimate nodes such as SoVRMLElevationGrid where decimation would not preserve the type of geometry. Decimated geometry is always an indexed face set.


  5. Does TGS have plans to implement the AUTO mode of the SoLocateHighlight class on Windows?

    SoLocateHighlight's highlighting behavior is implemented in OIV 2.5.2 (although it is not mentioned in the release notes).

    Regarding the default value of the "mode" field, there is a difference between SGI's Open Inventor 2.1.x and TGS's Open Inventor.

    • In SGI's Open Inventor 2.1.x, the default value is "AUTO".

    • In TGS's Open Inventor 2.5.x, the default value is "OFF", unless the environment variable IV_ALLOW_LOCATE_HIGHLIGHT is defined. If defined, the default value is "AUTO".

    This change was made to avoid what appears to be a performance problem with Open Inventor when viewing files that contain this node. The performance problem is the result of SoLocateHighlight disabling caching above itself. In most cases we observed that the highlighting was only meaningful in the original application that created the file. Since a viewer does not know the significance of the objects in the scene, e.g. what to do if one is clicked, we believe performance is more important than this highlighting.


  6. Does Open Inventor support Boolean operations? For example, how can I create a model of a cube with a cylindrical hole, something like myGraph = Cube - Cylinder?

    Open Inventor 2.6 and earlier does not support Boolean operations on geometry, so the easiest way to create such an object is to use a modeller program such as Amapi 3D from TGS, which supports such Boolean operations and exports to Open Inventor (.iv) format or VRML 1.0 or VRML97 (.wrl) formats. Of course, another way is to create your own object using face sets and coordinates.

    Starting with Open Inventor 3.0, the SolidViz extension (a separately licensed Open Inventor toolkit extension) is available. It provides solid modeling nodes for describing 3D shapes as combinations of other shapes using Boolean operations such as merge, intersection, and subtraction.

    Please note: SolidViz implements a visual simulation of CSG (Constructive Solid Geometry) shapes. A visual simulation of a CSG shape can be achieved in many cases at interactive speed by a special rendering method taking advantage of accelerated OpenGL rendering and the stencil buffer. See the Open Inventor help file for additional info.


  7. I am using SoCoordinate3 and SoFaceSet to represent some polygons. When the draw style is "wireframe", the tessellation of the polygons appears and is not very pretty. Wireframe mode should show only the edges of each polygon. Would using an SoVertexProperty node solve this problem?

    This is a well known "feature" of Open Inventor. Using an SoVertexProperty node would not produce a different result.

    The effect occurs because Inventor uses the same data for both Filled and Wireframe, just changing the OpenGL polygon mode. This is more efficient because Inventor does not have to recreate cached display lists and also does not have to include additional code in the primitive nodes to generate GL_LINES instead of GL_TRIANGLES.

    It would be a lot of work to change Inventor's behavior.

    The best thing we can recommend is to use a Switch node with an IndexedFaceSet and an IndexedLineSet. We think the actual coordinate data, which is the bulk of the memory usage, could be shared.
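
    A minimal sketch of that arrangement (coordinate values and index fields are omitted; they would come from your polygon data):

    SoSeparator *root = new SoSeparator;

    SoCoordinate3 *coords = new SoCoordinate3;   // vertex data shared by both shapes
    root->addChild( coords );

    SoSwitch *sw = new SoSwitch;
    sw->addChild( new SoIndexedFaceSet );        // child 0: filled polygons
    sw->addChild( new SoIndexedLineSet );        // child 1: polygon outlines only
    sw->whichChild = 0;                          // 0 = filled, 1 = outlines
    root->addChild( sw );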


  8. What is a quaternion and do I care?

    If you don't know what a quaternion is, you almost certainly don't care.

    EXCEPT... that they are used in a few places in Open Inventor (in SbRotation, SoMFRotation, and SoSFRotation) -- often by people who don't intend to use them ;-)

    If you specify a rotation with four arguments, for example,

    SbRotation(float q0, float q1, float q2, float q3)

    you are specifying a rotation as a quaternion which is NOT the same as specifying a direction and a rotation angle.

    To specify a direction and an angle, you would use the following call:

    SbRotation(const SbVec3f &axis, float radians)
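
    For example, a 90-degree rotation about the Y axis would be written as:

    SbRotation rot( SbVec3f(0.0f, 1.0f, 0.0f), 1.5708f );  // axis, angle in radians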

    If you are interested in learning more about quaternions, see the "Open Inventor Useful Links" page on the TGS web site for some references on the theory and practice of quaternions.


  9. How does Open Inventor convert a quaternion into a rotation matrix?

    Here is the algorithm:

    
    void SbRotation::getValue(SbMatrix &matrix) const
    {
      SbMat m;

      m[0][0] = 1 - 2.0 * (quat[1] * quat[1] + quat[2] * quat[2]);
      m[0][1] =     2.0 * (quat[0] * quat[1] + quat[2] * quat[3]);
      m[0][2] =     2.0 * (quat[2] * quat[0] - quat[1] * quat[3]);
      m[0][3] = 0.0;

      m[1][0] =     2.0 * (quat[0] * quat[1] - quat[2] * quat[3]);
      m[1][1] = 1 - 2.0 * (quat[2] * quat[2] + quat[0] * quat[0]);
      m[1][2] =     2.0 * (quat[1] * quat[2] + quat[0] * quat[3]);
      m[1][3] = 0.0;

      m[2][0] =     2.0 * (quat[2] * quat[0] + quat[1] * quat[3]);
      m[2][1] =     2.0 * (quat[1] * quat[2] - quat[0] * quat[3]);
      m[2][2] = 1 - 2.0 * (quat[1] * quat[1] + quat[0] * quat[0]);
      m[2][3] = 0.0;

      m[3][0] = 0.0;
      m[3][1] = 0.0;
      m[3][2] = 0.0;
      m[3][3] = 1.0;

      matrix.setValue(&m[0][0]);
    }


  10. Is there any way to suppress fill using SoVRMLIndexedFaceSet? I was hoping to achieve a "hidden line" effect using VRML.

    No, there is not. In VRML, face sets are always filled solid and line sets are always "see through". There does not appear to be any reasonable way to specify a "hidden line" effect using the VRML file format. The closest you could come would be to insert the geometry twice -- once as a face set with the fill color set to the background color and once as a line set. Theoretically this would give the desired result, but in practice it is likely to produce lots of z-buffer artifacts ("stitching").

    Any hidden line effect would have to be an option in the viewer. We do not know of any VRML viewers with this option, but all of our Inventor-based viewers have this option. Note however that only a face set can "hide" lines. Line sets are "see through" by definition.


  11. Which VRML 2.0 (now officially VRML97) nodes does Open Inventor support?

    With almost every new release of Open Inventor, TGS upgrades its VRML support. Go to the Table of Contents of the Open Inventor on-line Reference Manual. Select Modules/Inventor/VRML/VRML2.0 nodes to go to the VRML 2.0 page which lists all of the VRML nodes. Some nodes are marked with "Action not implemented" or "Action partially implemented."

    If a node is marked "Action not implemented", Open Inventor can read and write VRML files containing that node. However, the node will not perform any action when the scene graph is traversed.

    If a node is marked "Action partially implemented", see the help file to find out which features are implemented.

    Unmarked nodes are fully implemented.


  12. I have read my VRML scene into Open Inventor. Now how do I access the VRML scene to perform actions such as moving objects from one location to another?

    First you should read the Inventor Mentor and the TGS Extensions User Guide to understand the basic concepts of the Open Inventor scene graph. However this might not give you the "big picture" for this specific situation. Here is a mini-tutorial. But please note there are *many* ways to organize a scene graph, based on your application's requirements, so don't assume this is the only possible way! Generally this discussion is applicable to both VRML and Open Inventor files.

    The important question is: In your VRML file, what distinguishes one "object" from another? You need to be able to answer this question in order to manipulate the objects as separate entities.

    It's almost certain that each "object" will be a collection of nodes under some sort of grouping node. For example, in a VRML file:

    DEF Object27 Transform {
      children [
        Cone {}
      ]
    }

    might be one (trivial) object in the incoming file.

    Typically the file creator will assign a name to each grouping node that represents an independent object. For example, the file creator might simply assign them names "Object0000" through "Object0953". Since you are modeling the scene, you should adopt some useful naming convention. The node names will show up as "DEF name" in the output VRML file, as above.

    When you read the file into Open Inventor (from TGS), each node in the file will result in a corresponding node in the scene graph. The grouping node above is an SoVRMLTransform, for example.

    Let's say you wanted to find a particular object programmatically. If you know the objects all have unique names, you can do this:

    SoNode *pNode = SoNode::getByName( "Object51" );

    However this only gives you a pointer to the node, it doesn't tell you anything about where it is in the scene graph. To get that info, use a SearchAction like this (assuming sceneRoot is the root of the entire scene graph, i.e. what SoDB::readAll returned to you):

    SoSearchAction sa;
    sa.setName( "Object43" );
    sa.apply( sceneRoot );
    SoPath *pPath = sa.getPath();
    SoNode *pNode = pPath->getTail();

    If you want to be able to select objects by picking, look at the discussion about SoSelection and redrawOnSelectionChange in the Mentor (and other places). Remember though, that picking will select an actual geometry node, so the node at the tail of the selection path will be (for example) an SoVRMLIndexedFaceSet. To get a path to the grouping node that represents the entire object (which is presumably an ancestor of the geometry node), you have two choices. You can implement a pickFilterCallback (see the Mentor for details) or you can "manually" go up the path from the tail looking for a grouping with an appropriate name.

    Now you have the objects identified by some naming scheme and can get a pointer (or path) to a specific object.

    In order to control "object attributes" like position, scale, rotation, material, etc. in a classic Open Inventor scene graph you might put a set of attribute nodes as the first children of each object. For example:

    DEF Object35 Separator {
      Transform {}
      Material {}
      ... other object nodes ...
    }

    These nodes don't have to exist when the file is read in. You can create them when (and if) they are needed, for example, given a pointer to an object we want to rotate:

    // given pNode points to the object's Separator
    SoTransform *pTran;
    SoNode *pFirstChild = pNode->getChild( 0 );
    if (pFirstChild->isOfType(SoTransform::getClassTypeId()))
      pTran = (SoTransform*)pFirstChild;  // Already got one
    else {
      pTran = new SoTransform;        // Make one
      pNode->insertChild( pTran, 0 ); // Put as first child
    }
    pTran->rotation.setValue( SbRotation( ...   // set rotation
    

    There are many variations on this theme, for example giving the attribute nodes meaningful names. You could also use node kits for this purpose since they automatically create nodes (that are part of their catalog) when they are needed.

    The position/rotate/scale part is actually easier with VRML nodes! The grouping node that defines an object will typically be an SoVRMLTransform node. This node is effectively the combination of an SoSeparator and an SoTransform. So given a pointer to an object you want to rotate, you only have to do this:

    SoVRMLTransform *pNode = ... // somehow we got this pointer
    pNode->rotation.setValue( SbRotation( ... // set rotation
    

  13. I was trying to map a 3D point in world coordinates to a 2D point in screen coordinates. I used a function called ProjectToScreen in class SbViewVolume, but could not get the desired results. Do you have an example to do that?

    You are on the right track using the projectToScreen method. Usually people want to go the other way -- from 2D screen coordinates to 3D modeling coordinates -- and the boxZoom example in the src\Inventor\examples\Ivf folder (Win32 only) shows how to do that.

    The only example we distribute that uses projectToScreen is the ToolMaker example in the src\Inventor\examples\ToolMaker\09.Highlights folder. The relevant code is in the OverlayHL.cxx file and on page 222 of the ToolMaker book, but you don't need the book as you have the source code. The purpose of the code is to render selected objects as 2D rectangles in the overlay planes. The updateBbox method in the code projects the 3D bounding box for the selected object onto the screen and renders it as a 2D rectangle.
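
    A minimal sketch of the world-to-pixel conversion (pCamera and vpRegion are assumed to come from your viewer, e.g. via getCamera and getViewportRegion):

    SbViewVolume viewVol = pCamera->getViewVolume();

    SbVec3f worldPt( 1.0f, 2.0f, 3.0f );            // hypothetical world-space point
    SbVec3f screenPt;
    viewVol.projectToScreen( worldPt, screenPt );   // x,y in 0..1, z is depth

    SbVec2s vpSize = vpRegion.getViewportSizePixels();
    int xPixel = (int)(screenPt[0] * vpSize[0]);
    int yPixel = (int)(screenPt[1] * vpSize[1]);    // origin at the lower left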


  14. Does Open Inventor support "hidden line removal" mode? I know Open Inventor has HLR mode but when you set it, you see all the triangles. I am interested in having an HLR mode where there are no triangles, just the contours.

    We presume you are referring to the hidden line drawing mode that the viewers support. Yes, you do see the triangles. Open Inventor first draws the polygonal faces in the background color (OpenGL polygon mode = GL_FILL), then renders the same vertices using OpenGL polygon mode = GL_LINE. The faces are tessellated into triangles prior to being sent to OpenGL, hence the triangle edges that you see.

    We do not currently have plans to implement the kind of hidden line removal you are describing. However, you can simulate the hidden line removal effect you are seeking as follows:

    In your scene graph, insert the geometry twice -- once as a face set with the fill color set to the background color and once as a line set to define the edges. In order to avoid stitching, you might want to apply a slight offset using SoPolygonOffset.

    Note: It is not necessary to insert the geometry coordinates twice. With a little bit of forethought, you can use the same coordinate node for the face set and the line set -- if you use indexed face sets and line sets.
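
    A sketch of that scene graph arrangement (coordinate and index data are omitted; the black base color is an assumption that should match your viewer's background, and an SoLightModel of BASE_COLOR keeps the faces exactly that color):

    SoSeparator *root = new SoSeparator;

    SoCoordinate3 *coords = new SoCoordinate3;    // shared by faces and edges
    root->addChild( coords );

    // Faces: pushed slightly back and filled with the background color
    SoSeparator *faceSep = new SoSeparator;
    faceSep->addChild( new SoPolygonOffset );     // avoids "stitching"
    SoLightModel *lm = new SoLightModel;
    lm->model = SoLightModel::BASE_COLOR;         // ignore lights, use base color
    faceSep->addChild( lm );
    SoBaseColor *bg = new SoBaseColor;
    bg->rgb.setValue( 0.0f, 0.0f, 0.0f );         // assumed background color
    faceSep->addChild( bg );
    faceSep->addChild( new SoIndexedFaceSet );    // fill in coordIndex
    root->addChild( faceSep );

    // Edges: an indexed line set over the same coordinates
    SoSeparator *edgeSep = new SoSeparator;
    edgeSep->addChild( new SoIndexedLineSet );    // fill in coordIndex
    root->addChild( edgeSep );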


  15. I'm having difficulty trying to understand the screenArea parameter of the SoLevelOfDetail node. The Inventor Mentor states that it is in "square pixels", but I'm having trouble relating that to anything else. It does not seem to be related to the getSize of the Render Area from what I have tested, and I don't see how to determine it through any boundingBoxAction calls. How should I be computing the screen area?

    The short answer is to use SoLOD rather than SoLevelOfDetail.

    The long answer follows below:

    SoLevelOfDetail is a deprecated class. We don't recommend its use. The cost of computing the screen area (which involves quite a bit of work -- described below) tends to overwhelm the benefits of using SoLevelOfDetail. The computation may take more time than it would have taken to just render the geometry.

    We would recommend the use of SoLOD instead because the computation it uses to choose a particular child (distance to the camera) is much simpler and yields the same kind of results: close to the camera, you get a more detailed picture; further back, you get fewer details.

    Here is information on LevelOfDetail from the help file:

    The children of this node typically represent the same object or objects at varying levels of detail, from highest detail to lowest. The size of the objects when projected into the viewport is used to determine which version to use (i.e., which child to traverse).

    The size is computed as the area of the screen rectangle enclosing the projection of the 3D bounding box that encloses all of the children. When rendering, this size is compared to the values in the screenArea field. If the size is greater than the first value, child 0 is traversed. If it is smaller than the first, but greater than the second, child 1 is traversed, and so on. If there are fewer children than are required by this rule, the last child is traversed. The screenArea field contains just 0 by default, so the first child is always traversed.

    The size calculation takes the current complexity into account. If the complexity is 0 or is of type BOUNDING_BOX, the last child is always traversed. If the complexity is less than .5, the computed size is scaled down appropriately to use (possibly) a less detailed representation. If the complexity is greater than .5, the size is scaled up. At complexity 1, the first child is always used.

    Note that the SoLOD node is similar to SoLevelOfDetail, except the switching between levels in the SoLOD node is based on distance from the camera, which is faster than using screen area.

    (Even the help file implicitly recommends SoLOD :-)

    This means that

    1. Open Inventor computes the 3D bounding box of everything under the LevelOfDetail node by applying a GetBoundingBoxAction to the LevelOfDetail's children.
    2. It projects the bounding box onto the screen and finds the smallest enclosing rectangle (in pixels).
    3. It multiplies the width by the height to get the area in square pixels.
    4. This value is compared to the values in the screenArea field. (If complexity is not the default 0.5, there is an additional adjustment to the projected area.)

    The bottom line is that this is a lot of computation. Using an SoLOD node you can get roughly the same effect much more quickly and cheaply.
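
    A minimal SoLOD sketch (the three child separators are hypothetical versions of the same object at decreasing levels of detail):

    SoLOD *lod = new SoLOD;
    lod->range.set1Value( 0, 10.0f );   // child 0 when the camera is closer than 10 units
    lod->range.set1Value( 1, 50.0f );   // child 1 between 10 and 50 units
    lod->addChild( highDetailSep );
    lod->addChild( mediumDetailSep );
    lod->addChild( lowDetailSep );      // child 2 beyond 50 units
    root->addChild( lod );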


  16. We want to render 3D files that have 2 million points (i.e. 200 by 200 by 50) where each point will have a certain 3D coordinate and a value at that coordinate. The coordinates will not be equidistant. Can you do this with Open Inventor for Win32?

    Yes. The main components that Open Inventor provides over OpenGL are viewers that allow the viewing and manipulation of your dataset. You can have your application read your data, place it in Open Inventor primitives, and display the resulting graphics database (called the scene graph) using a render area or viewer. Optionally, the scene graph can be written out to a file in the Inventor file format, which can be read back by your application or by another viewer application such as SceneViewer, which is supplied with Open Inventor. Alternatively, you can manually or programmatically create an Inventor file containing your data, which can then be read by any Inventor viewer application such as SceneViewer.

    Open Inventor has a PointSet primitive which is an efficient way of displaying point data. Your description of the application implies that you may also need to display text (the data value) at each point. In order to do this efficiently you might want to subclass the PointSet object and create a custom Open Inventor node for displaying your data. The Open Inventor class library is highly extensible for this reason. Custom nodes inherit the powerful built-in methods for displaying, picking, searching, etc., in the scene graph.
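
    As a starting point, a minimal SoPointSet setup might look like this (the x, y, z arrays stand in for your own data; for millions of points you would normally fill the field with setValues rather than one value at a time):

    const int numPoints = 1000;                     // your data set would be much larger
    SoVertexProperty *vp = new SoVertexProperty;
    for (int i = 0; i < numPoints; i++)
      vp->vertex.set1Value( i, x[i], y[i], z[i] );  // hypothetical coordinate arrays

    SoPointSet *pointSet = new SoPointSet;
    pointSet->vertexProperty.setValue( vp );
    pointSet->numPoints.setValue( numPoints );
    root->addChild( pointSet );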

    Also note that Open Inventor automatically optimizes its use of the OpenGL library for rendering. For example, when appropriate, Open Inventor will create OpenGL display lists so that rendering can proceed at full speed with (almost) no overhead for traversing the scene graph.


  17. How can I prevent a callback node from being cached?

    The recommended method is:

    
    #include <Inventor/actions/SoGLRenderAction.h>
    #include <Inventor/elements/SoGLCacheContextElement.h>

    void myCallback( void *userData, SoAction *action )
    {
      SoState *state = action->getState();
      if (action->isOfType(SoGLRenderAction::getClassTypeId())) {
        SoGLCacheContextElement::shouldAutoCache(state,
            SoGLCacheContextElement::DONT_AUTO_CACHE);
        // Other rendering code
      }
    }

    The call to "shouldAutoCache" shown above will prevent Open Inventor's automatic render caching mechanism from including this node in a render cache. In other words, no Separator (or other caching node) above this Callback node will be able to create a render cache. You should make sure that any "sibling" geometry that should be cached is under its own Separator.

    If necessary you can use this call:

    SoCacheElement::invalidate( action->getState() );
    

    to completely eliminate the possibility of caching this node.


  18. [Win32] Is there a way to suppress Open Inventor automatic redraw on Windows when the window containing the render area is exposed or de-iconified?

    Sure, more than one. Here are some possibilities:

    • Override Inventor's redraw method and make your own decision about whether to actually do the redraw.
    • In an MFC program, use the scheme illustrated in the gdiRect example to synchronize the Inventor repaint on expose with the app's OnPaint method, but instead of calling Inventor's redraw method, just call ValidateRect (Win32) on the Inventor window to remove its pending WM_PAINT message from the queue.

      The gdiRect example is available from the TGS ftp site:

             ftp://ftp.tgs.com/private/FAQ/gdirect.zip  (25172 bytes)
      

  19. How do I create an array of objects such as SoCone? I tried to do:
         SoCone *cones[]=new[] SoCone;
    
    but it doesn't work!

    Create cones:

    const int numCones = 10;
    SoCone **cones = new SoCone* [numCones];
    for (int i = 0; i < numCones; i++) {
      cones[i] = new SoCone;
      cones[i]->ref();  // So we control lifespan
    }
    

    Clean up cones:

    for (int i = 0; i < numCones; i++) {
      cones[i]->unref();  // destroy
    }
    delete [] cones;
    

    Note that you cannot allocate an array of SoCone objects or of any other reference counted object. The reason is that reference counted objects will "automagically" destroy themselves when their ref count goes to zero. If an object in the middle of your array destroyed itself and tried to free its memory (in the middle of the allocated block of memory), at best your app would crash.


  20. How do I prevent an SoCallback node from being render cached? I need to make sure my callback is called on every render traversal.

    By default the callback node may become part of a render cache (OpenGL display list). In this case, so long as the cache remains valid, the callback node will not be traversed by the render action. If the callback function always makes the same OpenGL calls, this might be OK since the OpenGL calls will be part of the cache. But if the callback function interacts with the application or performs different actions at different times, then it should disable render caching. For example:

    SoState *state = action->getState();
    SoGLCacheContextElement::shouldAutoCache(state,
        SoGLCacheContextElement::DONT_AUTO_CACHE);
    

  21. What happens if an Open Inventor geometry file contains multiple file headers?

    When Open Inventor reads the file, it processes the first file header as the file header. Subsequent file headers in the file are treated as comments.


  22. How can I save a scene graph such that it is "self-contained"?

    See also the related FAQ question "Can you give me more info about writing a 'self-contained' file?"

    There are a couple of general approaches to this problem.

    1. You can apply a search action or a callback action to the scene graph for each of the types of nodes that might have an external file reference, and when a node is found, get the filename and add it to your list of files to be saved. Open Inventor does not keep a list of dependent files.
    2. You can create a portable iv file which contains all the information from referenced files. This is a somewhat less efficient way to store the data, however.

      Once the original .iv file has been read, all of the information is in memory. You can tweak the scene graph so that when the scene graph is saved, everything is saved in one file.

      For instance, an SoTexture2 node has the mutually exclusive filename and image fields. If the filename field was used in the original file, then normally when you save the file, the filename field would again be used, and the texture file would be external to the .iv file. However, if the touch method is called on the image field, Open Inventor detects the change and will use the image field when the file is saved, thereby saving the texture in the .iv file. Note that texture images embedded in a .iv file are not compressed. The default ASCII format for images is very verbose.

    Whether making a list of filenames or causing the data to all be saved in the .iv file, the first step is the same -- that is, finding the nodes with the external file references.
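
    For instance, here is a sketch of forcing every SoTexture2 in the scene to embed its image data when written (sceneRoot is the root of your scene graph; the same search pattern works for the other node types listed in the next question):

    SoSearchAction sa;
    sa.setType( SoTexture2::getClassTypeId() );
    sa.setInterest( SoSearchAction::ALL );
    sa.setSearchingAll( TRUE );          // search every node, even under switches
    sa.apply( sceneRoot );

    SoPathList &paths = sa.getPaths();
    for (int i = 0; i < paths.getLength(); i++) {
      SoTexture2 *tex = (SoTexture2 *)paths[i]->getTail();
      tex->image.touch();   // image field will now be written instead of filename
    }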


  23. Can you give me more info about writing a "self-contained" file?

    See also the related FAQ question "How can I save a scene graph such that it is 'self-contained'?"

    CAN BE SELF-CONTAINED
    The following nodes have image fields as well as filename fields. If you call the "touch" method on the image field, Open Inventor detects the change and will use the image field when the file is saved. This way the texture (i.e., the contents of the image field) is written to the .iv file.

    • SoImage
    • SoTexture2
    • SoTexture3
    • SoFile -- This node reads children from a named file. In order to make a self-contained file, you will need to search the scene graph, find each SoFile node, query its children, then replace the node with its children.

    ivcat is a utility that can be used to replace SoFile nodes with their children and to expand SoTexture2 nodes to explicitly include the texture data rather than a file name. See the Open Inventor help file for information on ivcat.

    CANNOT BE MADE COMPLETELY SELF-CONTAINED
    Some nodes intrinsically get their data from external files. A scene graph that contains any of these nodes cannot be made completely self-contained. Here's a list of them. (As new nodes are added, this list may need to be expanded.)

    • SoVRMLAnchor
    • SoVRMLAudioClip
    • SoVRMLBackground
    • SoVRMLImageTexture
    • SoVRMLInline
    • SoVRMLMovieTexture
    • SoVRMLScript
    • SoWWWAnchor
    • SoWWWInline
    • SoPattern -- The pattern definition is always specified in a separate file or programmatically. SoPattern does not have a field for storing the pattern. Thus an .iv file that uses an SoPattern node will not be fully self-contained -- unless the default pattern (defined in the Open Inventor DLL) is used.
    • SoExtTexture2 -- It has a filename field but no image field. It is not possible to store the image data with the node.


  24. How can I tell Open Inventor to ignore a particular node? Is there a way to set an ignore flag on a scene node (object) to disregard it, in the same manner that we can ignore a field?

    The brute force way is to set the ignore flags of all of the node's fields to TRUE. The more elegant way is to replace the node with a switch node and place the node under it. By "switching" the node out of the scene graph traversal, one essentially removes its effect on the traversal.
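
    A sketch of the switch approach (parent and nodeToIgnore are hypothetical; parent is assumed to be the node's SoGroup parent):

    nodeToIgnore->ref();                 // keep the node alive while re-parenting
    SoSwitch *sw = new SoSwitch;
    sw->whichChild = SO_SWITCH_NONE;     // traverse none of the children
    parent->replaceChild( nodeToIgnore, sw );
    sw->addChild( nodeToIgnore );
    nodeToIgnore->unref();
    // Later, set sw->whichChild = 0 to restore the node's effect.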


  25. How can I customize the behavior of SoWriteAction? For example, how can I prevent certain nodes from being written out?

    The following file on the TGS ftp site:

    ftp://ftp.tgs.com/private/FAQ/writev21action.zip

    contains a customized SoWriteAction that disables writing for nodes added after V2.1. (Note that it does not disable the VRML nodes. However, it would be easy to add them following the pattern in the provided code.)

    Also included is a simple "test drive" program.


  26. How does Open Inventor transform normalized device coordinates (0..1) to pixel coordinates?

    Open Inventor's conversion is the most straightforward one and is based on the "window" size specified to SbViewportRegion (this is the width and height of the window on screen in pixels). So given width and height in pixels:

    int x_pixelVal;
    float x_ndcVal;
    x_pixelVal = (int)(x_ndcVal * width);
    x_ndcVal = (float)x_pixelVal / (float)width;
    

  34. If I set "override" on a Material node, how can I turn off override later in the traversal?

    Once you set override on a Material node, for example:

    pMaterial->setOverride(TRUE);

    the override is in effect for all nodes at or below this level in the scene graph. Simply calling setOverride(FALSE) on a subsequent Material node has no effect because the previous override causes this node to be ignored. However you can turn off override by accessing the SoOverrideElement directly in a callback (see the SoCallback node). Note that some actions do not enable all the traversal state elements, so your callback function should check that the current action is an SoGLRenderAction.

    See SoOverrideElement.h for the methods to disable override on other traversal state elements.

    Example callback function:

    void disableOverrideCallback( void *myData, SoAction *action )
    {
      if (!action->isOfType(SoGLRenderAction::getClassTypeId()))
        return;
    

      SoState *state = action->getState();
      SoNode *node = action->getCurPathTail();

      SoOverrideElement::setAmbientColorOverride(state, node, FALSE);
      SoOverrideElement::setDiffuseColorOverride(state, node, FALSE);
      SoOverrideElement::setEmissiveColorOverride(state, node, FALSE);
      SoOverrideElement::setSpecularColorOverride(state, node, FALSE);
      SoOverrideElement::setShininessOverride(state, node, FALSE);
      SoOverrideElement::setTransparencyOverride(state, node, FALSE);
    }


  28. Picking - If my primitives (e.g. SoIndexedFaceSet) contain multiple faces, how can I determine which face was selected?

    First, get the SoPickedPoint object. You can get this directly from an SoRayPickAction. If you use SoSelection, you have to provide a pickFilter callback to get the SoPickedPoint.

    Once you have an SoPickedPoint, call getPath() and look at the typeId of the tail node. That will tell you which subclass of SoDetail will be returned from getDetail() (you have to cast the return value). In this case, an IndexedFaceSet, it would be an SoFaceDetail.

    The getFaceIndex() method tells you which face in the primitive was picked.

    Something like this:

    const SoPickedPoint *pPickedPt = ...

    SoPath *pPickPath = pPickedPt->getPath();
    SoNode *pTailNode = pPickPath->getTail();
    if (pTailNode->isOfType(SoIndexedFaceSet::getClassTypeId())) {
      const SoFaceDetail *pFaceDetail = (SoFaceDetail*)pPickedPt->getDetail();
      int faceIndex = pFaceDetail->getFaceIndex();
    }


  29. How do I do a gradient background for my viewer window?

    There is already an example program (available via ftp below) that defines a custom node for doing a textured background image. One solution would be to provide a texture image that is a pre-defined color gradient.

    However, it might be better to start with the supplied example code and modify it so that the OpenGL face set on which the texture is mapped has per-vertex color. Depending on the colors you specify for each of the vertices, you can produce a variety of gradients, including, but not limited to, vertical, horizontal, and diagonal.

    The example program is posted on the TGS ftp site:

    ftp://ftp.tgs.com/private/FAQ/bkgrndimage.zip

    Note that if you are drawing a background, you may be able to get a small performance increase by disabling the automatic glClear done by the Open Inventor viewer.


  30. Is there a way to "capture" the normals computed by Open Inventor and save them for future use?

    Yes! We have a sample (VC++) program showing how to do this. It can be downloaded from:

    ftp://ftp.tgs.com/private/FAQ/getnormals.zip

    Only vertex shapes, classes derived from SoVertexShape, have explicit normals. It is not currently possible to get the normals used for SoCone, SoSphere, etc. In general, there are three cases:

    1. First, normals may have been specified in an SoVertexProperty node directly associated with the vertex shape. Simply get the value of the vertex shape's vertexProperty field.
    2. Second, normals may have been inherited from an SoNormal or SoVertexProperty node in the scene graph. An easy way to get these is to apply an SoCallbackAction, possibly using a path to the vertex shape. In the callback function, use the SoCallbackAction's getNumNormals and getNormal methods (see the sketch after this list).
    3. Third, the normals may have been computed by Open Inventor. If you are using Open Inventor 3.0 or later, call the vertex shape's getNormalCache method. If using an older version, see the example program. Use the SoNormalCache's getNum and getNormal methods. (This class is considered an "extender" tool and is not documented in the Open Inventor help file.)
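
    For case 2, a sketch using SoCallbackAction (printNormals is a hypothetical callback and sceneRoot is your scene graph root):

    SoCallbackAction::Response
    printNormals( void *, SoCallbackAction *action, const SoNode *node )
    {
      int num = action->getNumNormals();
      for (int i = 0; i < num; i++) {
        const SbVec3f &n = action->getNormal( i );
        // store or print n here
      }
      return SoCallbackAction::CONTINUE;
    }

    // ... elsewhere ...
    SoCallbackAction ca;
    ca.addPreCallback( SoVertexShape::getClassTypeId(), printNormals, NULL );
    ca.apply( sceneRoot );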

  31. How can I programmatically take a "screen shot" of an SoExaminer viewer window?

    The Mentor example in %OIVHOME%\src\Inventor\examples\Mentor\Cxx\09.1.Print.cxx does pretty much what you are requesting.

    You will notice one difference between the view you see on the display and the view you get in the output file -- the background color. As noted in the comments in the code, a white background is used instead of black -- it uses up less ink or toner :-)


  32. How can I get SoExtSelection to work in "additive" mode?

    BACKGROUND INFO:
    Additive mode means that if you select an object using SoExtSelection and then you select it again, it doesn't get deselected (the usual behavior). This way you can (for example) first select objects A and B, then B and C, resulting in the cumulative selection of objects A, B and C. Normally object B would be *de*selected by SoExtSelection the second time it was selected.

    Internally, this deselection happens as follows: When you pick one or more objects, SoExtSelection checks to see if each picked object is already in the selection list and if it is, it removes it from the selection list. The result is that any object selected twice is deselected.

    CODE TO USE:
    To get around this behavior, you will need to use a pick filter callback. This callback allows you to take specific actions depending on what was picked.

    In the pick filter callback, you will check to see if the picked object is already in SoExtSelection's selection list.

    • If it is not, the callback should return the path to the picked object.
    • If it is, the callback should return an empty path to SoExtSelection. SoExtSelection will not be aware that the object has been repicked, and therefore will not deselect it. The result will be additive selection by SoExtSelection.


    Here's what the callback might look like:

    #include <Inventor/SoPickedPoint.h>

    SoPath *PickCB( void *userData, const SoPickedPoint *pick )
    {
      SoExtSelection *pSel = (SoExtSelection*)userData;
      SoPath *pPath = pick->getPath();
      SoNode *pTail = pPath->getTail();
      if (pSel->isSelected( pTail )) {
        return new SoPath;   // empty path: already selected, do not deselect
      }
      else {
        return pPath;        // newly picked: select it normally
      }
    }

    Here's the code to add the callback to your scene graph (root is the SoExtSelection node):

    root->setPickFilterCallback( PickCB, (void*)root );
    


- end -