General Reference Info
The creaseAngle field of the SoShapeHints node is one factor in whether you get flat shading or not. The crease angle is the angle between the normals of two adjacent faces. If the angle between the normals is greater than the specified creaseAngle, Inventor computes separate normals for each face and the object appears faceted.
The default crease angle is 0.5.
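For example, a minimal sketch (the angle value is just an illustration): raising the creaseAngle tells Inventor to smooth-shade across edges whose faces meet at less than that angle. Place the node ahead of the geometry it should affect, under the same Separator.

#include <Inventor/nodes/SoShapeHints.h>

SoShapeHints *makeSmoothingHints()
{
    SoShapeHints *pHints = new SoShapeHints;
    pHints->creaseAngle = 0.524f;   // radians (about 30 degrees)
    return pHints;
}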
A material binding of PER_VERTEX_INDEXED allows a different color at each vertex, but vertices shared between faces will have the same color. If the angle between faces is less than the crease angle and Open Inventor is computing the normals, the faces will share the same normal at their common vertices and will be smooth shaded.
Note the "note" of page 120 of the Inventor Mentor: A shading model is not explicitly specified: shading is dictated by a combination of
One approach is to modify the geometry file for the dragger (found in the data\draggerDefaults folder of the install) so the unwanted part is replaced by an empty separator. This is discussed on pages 417 and 418 of the Inventor Mentor book.
Programmatically one can accomplish the same thing by using the setPart method of the dragger to set any part listed in the Helpfile description of the dragger to an empty Separator. Note that you must pass an empty Separator, as passing NULL will cause your program to crash.
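For example, here is a hedged sketch (the dragger type and the part name "rotator" are only placeholders; check the dragger's parts list in the help file for the actual names):

#include <Inventor/draggers/SoTrackballDragger.h>
#include <Inventor/nodes/SoSeparator.h>

void hideDraggerPart( SoTrackballDragger *pDragger )
{
    SoSeparator *pEmpty = new SoSeparator;    // empty replacement geometry
    pDragger->setPart( "rotator", pEmpty );   // never pass NULL here
}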
We have an example program that shows the use of a custom node that does this. It is posted on the TGS ftp site:
ftp://ftp.tgs.com/private/FAQ/bkgrndimage.zip
Look in the source code of the custom node to see what options are available.
Some geometry nodes can be decimated and others cannot, depending on the algorithm used by the decimator. Here is the list of geometry nodes that can be decimated:
Here is the list of geometry nodes that cannot be decimated:
The decimator does not decimate nodes, such as SoCone, that adjust their complexity automatically (based on SoComplexity and/or the viewer-requested decimation level). The decimator also does not decimate nodes such as SoVRMLElevationGrid where decimation would not preserve the type of geometry. Decimated geometry is always an indexed face set.
SoLocateHighlight's highlighting behavior is implemented in OIV 2.5.2 (although it is not mentioned in the release notes).
Regarding the default value of the "mode" field, there is a difference between SGI's Open Inventor 2.1.x and TGS's Open Inventor.
This change was made to avoid what appears to be a performance problem with Open Inventor when viewing files that contain this node. The performance problem is the result of SoLocateHighlight disabling caching above itself. In most cases we observed that the highlighting was only meaningful in the original application that created the file. Since a viewer does not know the significance of the objects in the scene (e.g., what to do if one is clicked), we believe it is more important to preserve performance than to preserve this highlighting.
Open Inventor 2.6 and earlier does not support Boolean operations on geometry, so the easiest way to create such an object is to use a modeling program such as Amapi 3D from TGS, which supports such Boolean operations and exports to Open Inventor (.iv), VRML 1.0, or VRML97 (.wrl) format. Of course, another way is to create your own object using face sets and coordinates.
Starting with Open Inventor 3.0, the SolidViz extension (a separately licensed Open Inventor toolkit extension) is available. It provides solid modeling nodes for describing 3D shapes as combinations of other shapes using Boolean operations such as merge, intersection, and subtraction.
Please note: SolidViz implements a visual simulation of CSG (Constructive Solid Geometry) shapes. A visual simulation of a CSG shape can be achieved in many cases at interactive speed by a special rendering method taking advantage of accelerated OpenGL rendering and the stencil buffer. See the Open Inventor help file for additional info.
This is a well known "feature" of Open Inventor. Using an SoVertexProperty node would not produce a different result.
The effect occurs because Inventor uses the same data for both Filled and Wireframe, just changing the OpenGL polygon mode. This is more efficient because Inventor does not have to recreate cached display lists and also does not have to include additional code in the primitive nodes to generate GL_LINES instead of GL_TRIANGLES.
It would be a lot of work to change Inventor's behavior.
The best thing we can recommend is to use a Switch node with an IndexedFaceSet and an IndexedLineSet. We think the actual coordinate data, which is the bulk of the memory usage, could be shared.
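A minimal sketch of that arrangement (node and function names are illustrative): the single SoCoordinate3 node sits above the switch so both representations use the same vertex data.

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoSwitch.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>
#include <Inventor/nodes/SoIndexedLineSet.h>

SoSeparator *buildSwitchable( SoCoordinate3 *pCoords,
                              SoIndexedFaceSet *pFaces,
                              SoIndexedLineSet *pLines )
{
    SoSeparator *pRoot = new SoSeparator;
    pRoot->addChild( pCoords );      // coordinates shared by both children
    SoSwitch *pSwitch = new SoSwitch;
    pSwitch->addChild( pFaces );     // child 0: filled
    pSwitch->addChild( pLines );     // child 1: wireframe
    pSwitch->whichChild = 0;         // start in filled mode
    pRoot->addChild( pSwitch );
    return pRoot;
}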
If you don't know what a quaternion is, you almost certainly don't care.
EXCEPT... that they are used in a few places in Open Inventor (in SbRotation, SoMFRotation, and SoSFRotation) -- often by people who don't intend to use them ;-)
If you specify a rotation with four arguments, for example,
SbRotation(float q0, float q1, float q2, float q3)
you are specifying a rotation as a quaternion which is NOT the same as specifying a direction and a rotation angle.
To specify a direction and an angle, you would use the following call:
SbRotation(const SbVec3f &axis, float radians)
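For example, a small sketch contrasting the two constructors; both specify the same 90-degree rotation about the Y axis:

#include <Inventor/SbLinear.h>

// Axis + angle (in radians):
SbRotation byAxisAngle( SbVec3f(0.0f, 1.0f, 0.0f), 1.5707963f );

// The equivalent quaternion (q0, q1, q2, q3) = (x, y, z, w):
SbRotation byQuaternion( 0.0f, 0.7071068f, 0.0f, 0.7071068f );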
If you are interested in learning more about quaternions, see the "Open Inventor Useful Links" page on the TGS web site for some references on the theory and practice of quaternions.
Here is the algorithm:
void SbRotation::getValue(SbMatrix &matrix) const
{
    SbMat m;

    m[0][0] = 1 - 2.0 * (quat[1] * quat[1] + quat[2] * quat[2]);
    m[0][1] = 2.0 * (quat[0] * quat[1] + quat[2] * quat[3]);
    m[0][2] = 2.0 * (quat[2] * quat[0] - quat[1] * quat[3]);
    m[0][3] = 0.0;

    m[1][0] = 2.0 * (quat[0] * quat[1] - quat[2] * quat[3]);
    m[1][1] = 1 - 2.0 * (quat[2] * quat[2] + quat[0] * quat[0]);
    m[1][2] = 2.0 * (quat[1] * quat[2] + quat[0] * quat[3]);
    m[1][3] = 0.0;

    m[2][0] = 2.0 * (quat[2] * quat[0] + quat[1] * quat[3]);
    m[2][1] = 2.0 * (quat[1] * quat[2] - quat[0] * quat[3]);
    m[2][2] = 1 - 2.0 * (quat[1] * quat[1] + quat[0] * quat[0]);
    m[2][3] = 0.0;

    m[3][0] = 0.0;
    m[3][1] = 0.0;
    m[3][2] = 0.0;
    m[3][3] = 1.0;

    matrix.setValue(&m[0][0]);
}
No, there is not. In VRML, face sets are always filled solid and line sets are always "see through". There does not appear to be any reasonable way to specify a "hidden line" effect using the VRML file format. The closest you could come would be to insert the geometry twice -- once as a face set with the fill color set to the background color and once as a line set. Theoretically this would give the desired result, but in practice it is likely to produce lots of z-buffer artifacts ("stitching").
Any hidden line effect would have to be an option in the viewer. We do not know of any VRML viewers with this option, but all of our Inventor-based viewers have this option. Note however that only a face set can "hide" lines. Line sets are "see through" by definition.
With almost every new release of Open Inventor, TGS upgrades its VRML support. Go to the Table of Contents of the Open Inventor on-line Reference Manual. Select Modules/Inventor/VRML/VRML2.0 nodes to go to the VRML 2.0 page which lists all of the VRML nodes. Some nodes are marked with "Action not implemented" or "Action partially implemented."
If a node is marked "Action not implemented", Open Inventor can read and write VRML files containing that node. However, when it reads that node, it will not perform any action.
If a node is marked "Action partially implemented", see the help file to find out which features are implemented.
Unmarked nodes are fully implemented.
First you should read the Inventor Mentor and the TGS Extensions User Guide to understand the basic concepts of the Open Inventor scene graph. However this might not give you the "big picture" for this specific situation. Here is a mini-tutorial. But please note there are *many* ways to organize a scene graph, based on your application's requirements, so don't assume this is the only possible way! Generally this discussion is applicable to both VRML and Open Inventor files.
The important question is: In your VRML file, what distinguishes one "object" from another? You need to be able to answer this question in order to manipulate the objects as separate entities.
It's almost certain that each "object" will be a collection of nodes under some sort of grouping node. For example, in a VRML file:
DEF Object27 Transform { children [ Cone {} ] }
might be one (trivial) object in the incoming file.
Typically the file creator will assign a name to each grouping node that represents an independent object. For example, the file creator might simply assign them names "Object0000" through "Object0953". Since you are modeling the scene, you should adopt some useful naming convention. The node names will show up as "DEF name" in the output VRML file, as above.
When you read the file into Open Inventor (from TGS), each node in the file will result in a corresponding node in the scene graph. The grouping node above is an SoVRMLTransform, for example.
Let's say you wanted to find a particular object programmatically. If you know the objects all have unique names, you can do this:
SoNode *pNode = SoNode::getByName( "Object51" );
However this only gives you a pointer to the node, it doesn't tell you anything about where it is in the scene graph. To get that info, use a SearchAction like this (assuming sceneRoot is the root of the entire scene graph, i.e. what SoDB::readAll returned to you):
SoSearchAction sa;
sa.setName( "Object43" );
sa.apply( sceneRoot );
SoPath *pPath = sa.getPath();
SoNode *pNode = pPath->getTail();
If you want to be able to select objects by picking, look at the discussion about SoSelection and redrawOnSelectionChange in the Mentor (and other places). Remember though, that picking will select an actual geometry node, so the node at the tail of the selection path will be (for example) an SoVRMLIndexedFaceSet. To get a path to the grouping node that represents the entire object (which is presumably an ancestor of the geometry node), you have two choices. You can implement a pickFilterCallback (see the Mentor for details) or you can "manually" go up the path from the tail looking for a grouping with an appropriate name.
Now you have the objects identified by some naming scheme and can get a pointer (or path) to a specific object.
In order to control "object attributes" like position, scale, rotation, material, etc. in a classic Open Inventor scene graph you might put a set of attribute nodes as the first children of each object. For example:
DEF Object35 Separator { Transform {} Material {} ... other object nodes ... }
These nodes don't have to exist when the file is read in. You can create them when (and if) they are needed, for example, given a pointer to an object we want to rotate:
// given pNode points to the object's Separator
SoSeparator *pSep = (SoSeparator*)pNode;
SoTransform *pTran;
SoNode *pFirstChild = pSep->getChild( 0 );
if (pFirstChild->isOfType( SoTransform::getClassTypeId() ))
    pTran = (SoTransform*)pFirstChild;      // Already got one
else {
    pTran = new SoTransform;                // Make one
    pSep->insertChild( pTran, 0 );          // Put as first child
}
pTran->rotation.setValue( SbRotation( ...   // set rotation
There are many variations on this theme, for example giving the attribute nodes meaningful names. You could also use node kits for this purpose since they automatically create nodes (that are part of their catalog) when they are needed.
The position/rotate/scale part is actually easier with VRML nodes! The grouping node that defines an object will typically be an SoVRMLTransform node. This node is effectively the combination of an SoSeparator and an SoTransform. So given a pointer to an object you want to rotate, you only have to do this:
SoVRMLTransform *pNode = ... // somehow we got this pointer
pNode->rotation.setValue( SbRotation( ... // set rotation
You are on the right track using the ProjectToScreen method. Usually people want to go the other way -- from 2D screen coordinates to 3D modeling coordinates, and the boxZoom example in the src\Inventor\examples\Ivf folder (Win32 only) is a good example.
The only example we distribute that uses ProjectToScreen is the ToolMaker example in the src\Inventor\examples\ToolMaker\09.Highlights folder. The relevant code is in the OverlayHL.cxx file and on page 222 of the ToolMaker book, but you don't need the book as you have the source code. The purpose of the code is to render selected objects as 2D rectangles in the overlay planes. The updateBbox method in the code projects the 3D bounding box for the selected object onto the screen and renders it as a 2D rectangle.
We presume you are referring to the hidden line drawing mode that the viewers support. Yes, you do see the triangles. Open Inventor first draws the polygonal faces in the background color (OpenGL polygon mode GL_FILL), then renders the same vertices using OpenGL polygon mode GL_LINE. The faces are tessellated into triangles before being sent to OpenGL, hence the triangle edges that you see.
We do not currently have plans to implement the kind of hidden line removal you are describing. However, you can simulate the hidden line removal effect you are seeking as follows:
In your scene graph, insert the geometry twice -- once as a face set with the fill color set to the background color and once as a line set to define the edges. In order to avoid stitching, you might want to apply a slight offset using SoPolygonOffset.
Note: It is not necessary to insert the geometry coordinates twice. With a little bit of forethought, you can use the same coordinate node for the face set and the line set -- if you use indexed face sets and line sets.
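Here is a hedged sketch of that scene graph fragment (function and parameter names are illustrative); the SoPolygonOffset node pushes the filled faces back slightly so the edges win the depth test:

#include <Inventor/SbColor.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoPolygonOffset.h>
#include <Inventor/nodes/SoBaseColor.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>
#include <Inventor/nodes/SoIndexedLineSet.h>

SoSeparator *buildHiddenLine( SoCoordinate3 *pCoords,
                              SoIndexedFaceSet *pFaces,
                              SoIndexedLineSet *pEdges,
                              const SbColor &bgColor )
{
    SoSeparator *pRoot = new SoSeparator;
    pRoot->addChild( pCoords );                // coordinates shared by faces and edges

    // Faces: drawn in the background color and offset away from the viewer.
    SoSeparator *pFaceSep = new SoSeparator;
    pFaceSep->addChild( new SoPolygonOffset );
    SoBaseColor *pFill = new SoBaseColor;
    pFill->rgb.setValue( bgColor );
    pFaceSep->addChild( pFill );
    pFaceSep->addChild( pFaces );
    pRoot->addChild( pFaceSep );

    // Edges: drawn on top of the offset faces.
    pRoot->addChild( pEdges );
    return pRoot;
}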
The short answer is to use SoLOD rather than SoLevelOfDetail.
The long answer follows below:
SoLevelOfDetail is a deprecated class. We don't recommend its use. The cost of computing the screen area (which involves quite a bit of work -- described below) tends to overwhelm the benefits of using SoLevelOfDetail. The computation may take more time than it would have taken to just render the geometry.
We would recommend the use of SoLOD instead because the computation it uses to choose a particular child (distance to the camera) is much simpler and yields the same kind of results: close to the camera, you get a more detailed picture; further back, you get fewer details.
Here is information on LevelOfDetail from the help file:
The children of this node typically represent the same object or objects at varying levels of detail, from highest detail to lowest. The size of the objects when projected into the viewport is used to determine which version to use (i.e., which child to traverse). The size is computed as the area of the screen rectangle enclosing the projection of the 3D bounding box that encloses all of the children.

When rendering, this size is compared to the values in the screenArea field. If the size is greater than the first value, child 0 is traversed. If it is smaller than the first but greater than the second, child 1 is traversed, and so on. If there are fewer children than are required by this rule, the last child is traversed. The screenArea field contains just 0 by default, so the first child is always traversed.
The size calculation takes the current complexity into account. If the complexity is 0 or is of type BOUNDING_BOX, the last child is always traversed. If the complexity is less than .5, the computed size is scaled down appropriately to use (possibly) a less detailed representation. If the complexity is greater than .5, the size is scaled up. At complexity 1, the first child is always used.
Note that the SoLOD node is similar to SoLevelOfDetail, except the switching between levels in the SoLOD node is based on distance from the camera, which is faster than using screen area.
(Even the help file implicitly recommends SoLOD :-)
This means that
The bottom line is that this is a lot of computation. Using an SoLOD node you can get roughly the same effect much more quickly and cheaply.
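For reference, a minimal sketch of an SoLOD setup (the children and distances are placeholders): children are ordered from most to least detailed, and the range field lists the camera distances at which to switch between them.

#include <Inventor/nodes/SoLOD.h>

SoLOD *buildLOD( SoNode *pHighDetail, SoNode *pMediumDetail, SoNode *pLowDetail )
{
    SoLOD *pLod = new SoLOD;
    pLod->addChild( pHighDetail );     // used when closer than 10 units
    pLod->addChild( pMediumDetail );   // used between 10 and 50 units
    pLod->addChild( pLowDetail );      // used beyond 50 units
    pLod->range.set1Value( 0, 10.0f );
    pLod->range.set1Value( 1, 50.0f );
    return pLod;
}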
Yes. The main components that Open Inventor provides over OpenGL are viewers that allow you to view and manipulate your data set. You can have your application read your data, place it in Open Inventor primitives, and display the resulting graphics database (called the scene graph) in a render area or viewer. Optionally, you can write the scene graph out to a file in the Inventor file format, which can be read back by your application or by another viewer application such as SceneViewer (supplied with Open Inventor). Alternatively, you can manually or programmatically create an Inventor file containing your data, which can be read by any Inventor viewer application such as SceneViewer.
Open Inventor has a PointSet primitive which is an efficient way of displaying point data. Your description of the application implies that you may also need to display text (the data value) at each point. In order to do this efficiently you might want to subclass the PointSet object and create a custom Open Inventor node for displaying your data. The Open Inventor class library is highly extensible for this reason. Custom nodes inherit the powerful built-in methods for displaying, picking, searching, etc., in the scene graph.
Also note that Open Inventor automatically optimizes its use of the OpenGL library for rendering. For example, when appropriate, Open Inventor will create OpenGL display lists so that rendering can proceed at full speed with (almost) no overhead for traversing the scene graph.
The recommended method is:
#include <Inventor/actions/SoGLRenderAction.h>
#include <Inventor/elements/SoGLCacheContextElement.h>

void myCallback( void *userData, SoAction *action )
{
    SoState *state = action->getState();
    if (action->isOfType( SoGLRenderAction::getClassTypeId() )) {
        SoGLCacheContextElement::shouldAutoCache( state,
            SoGLCacheContextElement::DONT_AUTO_CACHE );

        // Other rendering code
    }
}

The call to "shouldAutoCache" shown above will prevent Open Inventor's automatic render caching mechanism from including this node in a render cache. In other words, no Separator (or other caching node) above this Callback node will be able to create a render cache. You should make sure that any "sibling" geometry that should be cached is under its own Separator.
If necessary you can use this call:
SoCacheElement::invalidate( action->getState() );
to completely eliminate the possibility of caching this node.
Sure, more than one. Here are some possibilities:
The gdiRect example is available from the TGS ftp site:
ftp://ftp.tgs.com/private/FAQ/gdirect.zip (25172 bytes)
SoCone *cones[] = new[] SoCone;
...but it doesn't work!
Create cones:
const int numCones = 10;
SoCone **cones = new SoCone* [numCones];
for (int i = 0; i < numCones; i++) {
cones[i] = new SoCone;
cones[i]->ref(); // So we control lifespan
}
Clean up cones:
for (int i = 0; i < numCones; i++) {
cones[i]->unref(); // destroy
}
delete [] cones;
Note that you cannot allocate an array of SoCone objects or of any other reference counted object. The reason is that reference counted objects will "automagically" destroy themselves when their ref count goes to zero. If an object in the middle of your array destroyed itself and tried to free its memory (in the middle of the allocated block of memory), at best your app would crash.
By default the callback node may become part of a render cache (OpenGL display list). In this case, so long as the cache remains valid, the callback node will not be traversed by the render action. If the callback function always makes the same OpenGL calls, this might be OK since the OpenGL calls will be part of the cache. But if the callback function interacts with the application or performs different actions at different times, then it should disable render caching. For example:
SoState *state = action->getState();
SoGLCacheContextElement::shouldAutoCache(state,
SoGLCacheContextElement::DONT_AUTO_CACHE);
When Open Inventor reads the file, it processes the first file header as the file header. Subsequent file headers in the file are treated as comments.
See also the related FAQ question "Can you give me more info about writing a 'self-contained' file?"
There are a couple of general approaches to this problem.
Once the original .iv file has been read, all of the information
is in memory. You can tweak the scene graph so that when the scene
graph is saved, everything is saved in one file.
For instance, an SoTexture2 node has the mutually exclusive filename
and image fields. If the filename field was used in the original file,
then normally when you save the file, the filename field would again
be used, and the texture file would be external to the .iv file. However,
if the touch method is called on the image field, Open Inventor detects
the change and will use the image field when the file is saved, thereby
saving the texture in the .iv file. Note that texture images embedded
in a .iv file are not compressed. The default ASCII format for images
is very verbose.
Whether making a list of filenames or causing the data to all be saved in the .iv file, the first step is the same -- that is, finding the nodes with the external file references.
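As a sketch of both steps for the SoTexture2 case (assuming the root of the scene graph is in sceneRoot), an SoSearchAction finds the texture nodes and touching each image field forces the image data to be written inline:

#include <Inventor/actions/SoSearchAction.h>
#include <Inventor/nodes/SoTexture2.h>

void embedTextures( SoNode *sceneRoot )
{
    SoSearchAction sa;
    sa.setType( SoTexture2::getClassTypeId() );
    sa.setInterest( SoSearchAction::ALL );     // find every SoTexture2
    sa.apply( sceneRoot );

    const SoPathList &paths = sa.getPaths();
    for (int i = 0; i < paths.getLength(); i++) {
        SoTexture2 *pTex = (SoTexture2 *)paths[i]->getTail();
        pTex->image.touch();                   // image field will now be written to the .iv file
    }
}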
See also the related FAQ question "How can I save a scene graph such that it is 'self-contained'?"
CAN BE SELF-CONTAINED
The following nodes have image fields as well as filename fields. If you
call the "touch" method on the image field, Open Inventor detects the
change and will use the image field when the file is saved. This way the
texture (i.e., the contents of the image field) is written to the .iv
file.
ivcat is a utility that can be used to replace SoFile nodes with their children and to expand SoTexture2 nodes to explicitly include the texture data rather than a file name. See the Open Inventor help file for information on ivcat.
CANNOT BE MADE COMPLETELY SELF-CONTAINED
Some nodes intrinsically get their data from external files. A scene graph
that contains any of these nodes cannot be made completely self-contained.
Here's a list of them. (As new nodes are added, this list may need to
be expanded.)
The brute force way is to set the ignore flags of all of the node's fields to TRUE. The more elegant way is to replace the node with a switch node and place the node under it. By "switching" the node out of the scene graph traversal, one essentially removes its effect on traversal.
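A small sketch of the switch approach (the parent and node pointers are assumed to be known): wrap the node in an SoSwitch whose whichChild is SO_SWITCH_NONE so traversal skips it.

#include <Inventor/nodes/SoGroup.h>
#include <Inventor/nodes/SoSwitch.h>

void switchOutNode( SoGroup *pParent, SoNode *pNode )
{
    int index = pParent->findChild( pNode );
    if (index < 0)
        return;                           // node is not a child of this parent

    SoSwitch *pSwitch = new SoSwitch;
    pSwitch->whichChild = SO_SWITCH_NONE; // traverse none of the children
    pSwitch->addChild( pNode );
    pParent->replaceChild( index, pSwitch );
}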
The following file on the TGS ftp site:
ftp://ftp.tgs.com/private/FAQ/writev21action.zip
contains a customized SoWriteAction that disables writing for nodes added after V2.1. (Note that it does not disable the VRML nodes. However, it would be easy to add them following the pattern in the attached code.)
Also included is a simple "test drive" program.
Open Inventor's conversion is the most straightforward one and is based on the "window" size specified to SbViewportRegion (this is the width and height of the window on screen in pixels). So given width and height in pixels:
int x_pixelVal;
float x_ndcVal;
x_pixelVal = (int)(x_ndcVal * width);
x_ndcVal = (float)x_pixelVal / (float)width;
Once you set override on a Material node, for example:
pMaterial->setOverride(TRUE);
the override is in effect for all nodes at or below this level in the scene graph. Simply calling setOverride(FALSE) on a subsequent Material node has no effect because the previous override causes this node to be ignored. However you can turn off override by accessing the SoOverrideElement directly in a callback (see the SoCallback node). Note that some actions do not enable all the traversal state elements, so your callback function should check that the current action is an SoGLRenderAction.
See SoOverrideElement.h for the methods to disable override on other traversal state elements.
Example callback function:
void disableOverrideCallback( void *myData, SoAction *action )
{
    if (!action->isOfType( SoGLRenderAction::getClassTypeId() ))
        return;

    SoState *state = action->getState();
    SoNode *node = action->getCurPathTail();

    SoOverrideElement::setAmbientColorOverride( state, node, FALSE );
    SoOverrideElement::setDiffuseColorOverride( state, node, FALSE );
    SoOverrideElement::setEmissiveColorOverride( state, node, FALSE );
    SoOverrideElement::setSpecularColorOverride( state, node, FALSE );
    SoOverrideElement::setShininessOverride( state, node, FALSE );
    SoOverrideElement::setTransparencyOverride( state, node, FALSE );
}
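To put this function into the traversal, attach it to an SoCallback node placed where the override should stop taking effect. A minimal sketch (subgraphRoot is assumed to be your own group node):

#include <Inventor/nodes/SoCallback.h>

SoCallback *pCallback = new SoCallback;
pCallback->setCallback( disableOverrideCallback );
subgraphRoot->addChild( pCallback );   // place before the nodes that should ignore the override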
First, get the SoPickedPoint object. You can get this directly from an SoRayPickAction. If you use SoSelection, you have to provide a pickFilter callback to get the SoPickedPoint.
Once you have an SoPickedPoint, call getPath() and look at the typeId of the tail node. That will tell you which subclass of SoDetail will be returned from getDetail() (you have to cast the return value). In this case, an IndexedFaceSet, it would be an SoFaceDetail.
The getFaceIndex() method tells you which face in the primitive was picked.
Something like this:
const SoPickedPoint *pPickedPt = ...
SoPath *pPickPath = pPickedPt->getPath();
SoNode *pTailNode = pPickPath->getTail();
if (pTailNode->isOfType( SoIndexedFaceSet::getClassTypeId() )) {
    const SoFaceDetail *pFaceDetail = (SoFaceDetail*)pPickedPt->getDetail();
    int faceIndex = pFaceDetail->getFaceIndex();
}
There is already an example program (available via ftp below) that defines a custom node for doing a textured background image. One solution would be to provide a texture image that is a pre-defined color gradient.
However, it might be better to start with the supplied example code and modify it so that the OpenGL face set on which the texture is mapped has per-vertex color. Depending on the colors you specify for each of the vertices, you can produce a variety of gradients, including, but not limited to vertical, horizontal, and diagonal.
The example program is posted on the TGS ftp site:
ftp://ftp.tgs.com/private/FAQ/bkgrndimage.zip
Note that if you are drawing a background, you may be able to get a small performance increase by disabling the automatic glClear done by the Open Inventor viewer.
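For example, a hypothetical sketch (viewer is assumed to be a pointer to one of the Open Inventor viewer or render area classes); the background node then does the clearing instead of the viewer:

viewer->setClearBeforeRender( FALSE );   // skip the automatic glClear each frame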
Yes! We have a sample (VC++) program showing how to do this. It can be downloaded from:
ftp://ftp.tgs.com/private/FAQ/getnormals.zip
Only vertex shapes, classes derived from SoVertexShape, have explicit normals. It is not currently possible to get the normals used for SoCone, SoSphere, etc. In general, there are three cases:
The Mentor example in %OIVHOME%\src\Inventor\examples\Mentor\Cxx\09.1.Print.cxx does pretty much what you are requesting.
You will notice one difference between the view you see on the display and the view you get in the output file -- the background color. As noted in the comments in the code, a white background is used instead of black -- it uses up less ink or toner :-)
BACKGROUND INFO:
Additive mode means that if you select an object using SoExtSelection
and then you select it again, it doesn't get deselected (the usual behavior).
This way you can (for example) first select objects A and B, then B and
C, resulting in the cumulative selection of objects A, B and C. Normally
object B would be *de*selected by SoExtSelection the second time it was
selected.
Internally, this deselection happens as follows: When you pick one or more objects, SoExtSelection checks to see if each picked object is already in the selection list and if it is, it removes it from the selection list. The result is that any object selected twice is deselected.
CODE TO USE:
To get around this behavior, you will need to use a pick filter callback.
This callback allows you to take specific actions depending on what was
picked.
In the pick filter callback, you will check to see if the picked object is already in SoExtSelection's selection list.
Here's what the callback might look like:
#include <Inventor/SoPickedPoint.h>

SoPath *PickCB( void *userData, const SoPickedPoint *pick )
{
    SoExtSelection *pSel = (SoExtSelection*)userData;
    SoPath *pPath = pick->getPath();
    SoNode *pTail = pPath->getTail();
    if (pSel->isSelected( pTail )) {
        return new SoPath;   // already selected: return an empty path so it is not deselected
    }
    else {
        return pPath;        // not selected yet: let the selection proceed normally
    }
}
Here's the code to add the callback to your scene graph:
root->setPickFilterCallback( PickCB, (void*)root );
- end -