Semantic Annotation of 3D Digital Representation of Cultural Artefacts
PhD Student
The University of Queensland
eResearch Lab, School of ITEE
Brisbane QLD 4072, Australia
Ph: +617 3365 1092
chih.yu@uqconnect.edu.au
Abstract
This paper is an extended abstract that describes the significance of the 3D Semantic Annotation research project and its results to date. The paper gives a brief overview of how annotation services have been received by cultural heritage institutions and of some major problems that exist in current 3D annotation services. Several possible solutions are described, such as attaching annotations to meaningful parts (3D regions or segments) and ontology-based annotation (a folksonomy and taxonomy hybrid), which can potentially enhance indexing and searching in a large digital repository. Later sections describe the current progress of the research--algorithms that make it possible to attach annotations and migrate them between a 3D format and a 2.5D format. The paper also outlines the project's methodology and plan.
Categories and Subject Descriptors
H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing – indexing methods.
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval – clustering, retrieval models, search process, selection process.
H.3.5 [Information Storage and Retrieval]: Online Information Service – data sharing, web based service.
H.3.7 [Information Storage and Retrieval]: Digital Libraries – collections, user issues.
General Terms
Algorithms, Design, Theory, Experimentation.
Keywords
3D, artefact, cultural, heritage, migration, semantic, annotation.
1. BACKGROUND
Advances in 3D data acquisition, processing and visualization technologies are providing museums and cultural institutions with new methods for preserving cultural heritage and making it more accessible to scholars and the public, via online search interfaces. Increasing numbers of museums are using 3D scanning techniques to overcome the limitations of 2D data representations and to improve access to high quality surrogates of fragile and valuable artefacts via the Internet [1-4]. Although some museums have been simulating 3D by capturing multiple 2D images of an object and combining them using QuickTime, the underlying 2D representation does not provide the rich decorative, structural and topological information required by serious scholars. The trend is increasingly towards the use of 3D laser scanners to capture precise 3D digital models that can be accurately analysed, measured and compared.
As the size of online collections of 3D artefacts grows, searching and browsing these huge repositories becomes more difficult. Museums are finding the cost of providing metadata for their collections prohibitive and are keen to explore how they might exploit social tagging and annotation services [5]. High quality annotations--attached to both the complete object and to specific segments or features--have the potential to significantly improve the relevance of retrieved search results. Although some annotation services for 3D objects already exist, they are designed for specific disciplines or depend on proprietary software and formats. The majority also only support the attachment of annotations to whole objects, points or 2D regions--not to 3D surface segments, surface patterns or specific object parts (e.g., the handle on a pot).
2. SIGNIFICANT RESEARCH PROBLEM
The aim of the 3DSA (3D Semantic Annotation) project is to develop open annotation (creation, browse, search and presentation) services for 3D digital representations of museum objects.
More specifically the aim is to develop software services to support:
- Migration of annotations between different (3D and 2.5D) representations of the same object to increase knowledge sharing and accessibility;
- Semantic annotations--machine-understandable tags that are drawn from an ontology. Our aim is to use the CIDOC-CRM [6] ontology, which has been designed specifically for the museum community. Ontology-based annotations are valuable because, in addition to validation and quality control, they allow reasoning about the object they are annotating and enable the annotated resources to become part of the larger Semantic Web;
- Annotation of meaningful parts or features of a 3D model. The difficulty lies in specifying the particular feature of interest via simple drawing, selection and segmentation tools. Drawing the boundary of a 3D surface feature or a 3D segment can be very difficult and time consuming.
- A common model for attaching annotations to 3D artefacts regardless of their format. Such a model also enables re-use and display of annotations across different annotation clients.
Hence the research questions that I plan to tackle are:
- How does one automatically migrate annotations between 3D and 2.5/2D formats?
- What are the optimum technologies for attaching semantic annotations to 3D models (including point annotations, surface region annotations and segment annotations)?
- Which ontologies are applicable for labeling 3D cultural heritage objects?
- What common models exist for facilitating annotation re-use across target object representations and across annotation clients?
- How does one combine automatically extracted low-level feature descriptions with manually-attached segment tags to infer high-level semantic descriptions of 3D objects? e.g., if an object has this colour, texture, shape and size and is labeled with these tags (vase, handle, decoration), then it can be inferred that it is a Ming dynasty vase.
3. RELATED WORK
Most prior work in the field of 3D annotations has focused on the annotation of discipline-specific objects--for example, architectural and engineering CAD drawings [7,8], 3D crystallography models [26] and 3D scenes [27]. All of these systems enable users to attach annotations to 3D models and to browse annotations added by others, asynchronously. However, they are all highly dependent on the discipline-specific format of the target objects. A survey of existing systems failed to reveal any interoperable, collaborative, Web-based annotation systems for 3D models of museum artefacts that enable descriptive text or semantic tags to be attached (either to the whole object or to a point or region on the object) and then saved to enable later, asynchronous searching, browsing and response by other users.
Projects such as SCULPTEUR [9], the Princeton 3D search engine [10] and Columbia Shape Search [11] use a combination of machine learning (to extract colour, pattern and shape) and application semantics (who, what, where, when etc.) to automatically cluster 3D objects. However, these projects fail to take advantage of community-generated tags and annotations drawn from ontology-directed folksonomies. Hunter et al. [14, 15, 16] have previously applied semantic inferencing rules to enable the automated annotation of 2D images – and demonstrated improvements in concept-based search performance. Hunter et al. have also developed annotation tools for 3D museum artefacts, based on the Annotea model [13]. But this previous work has only enabled the attachment of tags and comments to 3D points and/or views of the complete object. Other relevant prior work is in the area of segmentation of 3D models [28] and the attachment of semantic annotations to segments [29, 30]. Previous approaches have involved automatic mesh segmentation as well as manual or user-guided segmentation [12, 31]. However, as far as we are aware, none of this work has been applied in the context of museum artefacts – or has used the CIDOC CRM ontology for the semantic labels. Our aim is to adopt an approach similar to Ji et al. [12] to perform the segmentation and to apply and evaluate it within the context of a specific collection of 3D museum artefacts.
Finally we are interested in developing a common model for annotating different representations of the one 3D object and for automatically migrating the annotations between models and between annotation clients. The Open Annotation Collaboration (OAC) [32] has developed an alpha data model to support interoperability, sharing and exchange of annotations. I plan to evaluate this model in the context of sharing and re-use of annotations on 3D museum artefacts.
4. APPROACH AND RESULTS TO DATE
To evaluate the proposed annotation services, we are currently using Indigenous wooden ceremonial sculptures from the Wik peoples of Western Cape York. This collection of wooden, ochre-painted sculptures is held in the UQ Anthropology Museum. Indigenous artists from Cape York are interested in emulating and extending the techniques used by artists from these earlier periods. They would like to be able to access high resolution 3D versions of the sculptures without having to travel to Brisbane for long periods. In addition, the UQ Art Museum is developing an exhibition around these sculptures, to open in 2010. The aim of this project is to work with the UQ Anthropology Museum curators, Indigenous artists from Cape York and the UQ Art Museum to develop a virtual collection of 3D models which can be used for remote access, collaborative annotation, knowledge sharing, exhibition development and the evaluation of this project’s outcomes.
In the first phase of the project, we have developed a web based prototype 3D annotation tool that enables:
- Annotation of points on 3D/2.5D models captured in different resolutions;
- Browsing, retrieval and display of annotations;
- Automated migration of annotations across 3D models of different resolutions (high-polygon and low-polygon);
- Automatic migration of annotations between 3D models and 2.5D VR objects (which comprise a sequence of 2D images).
Figure 1: Screen shot of the Web-based 3D annotation prototype
The prototype (shown in Figure 1) is accessible via a link from the project website's 3D gallery (Figure 2). The artefacts were scanned using a Konica Minolta Vivid9i non-contact 3D laser scanner and the 3D digital models were generated using GeoMagic software. Each 3D model was initially captured in VRML format, converted to Collada format using Autodesk Maya, and then converted into O3D format (Google's 3D scene graph API) using Google's converter. At this stage, the project has generated a sample set of Indigenous artefacts for evaluation purposes. More artefacts of a variety of backgrounds and materials will be scanned in the future to more fully evaluate the search and indexing features.
Figure 2: Project Web Site and 3D Gallery.
The prototype was developed using a combination of Web 2.0 technologies and third party services:
- 3D viewer – Google's O3D scene graph API provides a browser plugin with a shader-based, low-level graphics API and a high-level JavaScript library. O3D is flexible, extensible [17], cross-compatible, open source and Google's proposed open web standard for 3D [18].
- 2.5D VR object viewer – developed using Adobe Flex, a free, open source framework for building web applications using ActionScript 3.0.
- Annotation storage – AJAX and Danno, an HTTP-based repository that provides APIs for creating, updating, deleting and querying annotations and replies, and for bulk upload and harvesting of annotations [19].
- User interface - AJAX, PHP and jQuery, a JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development [20].
An average 3D digital model of a cultural heritage artefact contains 50-200 MB of data that must be processed on the client side. However, not every computer is capable of performing this task continuously, especially in a web browser environment, which is not designed specifically to be a 3D rendering tool. In order to support users with limited computational power or limited internet bandwidth, we generate three different representations for each artefact plus one archival version (the original raw scan). Mechanisms for precisely and automatically migrating annotations between these formats are important to ensure annotation tags are displayed at the correct position in all formats. Below is a list of the different representations we generate for each object:
- Archival quality 3D model (Raw 3D data): For storage purposes, not accessible online.
- High quality 3D model for web display: Online display for users who have a standard CPU and internet speed (Figure 3).
- Low quality 3D model for web display: Compressed version for users whose processors have limited graphical power or whose internet connection is slow.
- 2.5D VR object for web display: Non-3D version, suitable for users who do not have a graphics processor or who have a very slow internet connection (Figure 4).
The archival quality model is not displayed or annotated in the prototype; it is stored as a raw copy from which versions of various resolutions and formats can be generated. Both the high quality and low quality 3D models are displayed in a 3D viewer developed with the O3D API. The 2.5D VR object version is stored as a standard Flash/Flex file and viewed using Flash.
Figure 3: High quality/polygon 3D version using pointers as annotation representation.
Figure 4: 2.5D VR object version using circles as annotation representation.
The 3D annotation project requires an understanding of linear algebra, trigonometry, vector algebra and matrices [21]. Below are brief descriptions of how the annotation features of our prototype work within 4 different scenarios.
4.1 Attaching annotations to a single point on a 3D model.
Imagine our computer screen is the starting point and there is a virtual 3D object behind it. Once a point has been clicked, the system detects the mouse position (X, Y) and casts a ray from that point. If the ray intersects the 3D object, it determines a world position (X, Y, Z) in the virtual environment. This algorithm is built into the O3D library and does not need to be programmed. However, the result does not represent the selected local position on the 3D model: what has been selected is simply a position in 3D world space where the ray hit the object, not a local position relative to the chosen object. So the final step requires converting the world position into a local position using:
LP(X,Y,Z) = RI(X,Y,Z) × LM⁻¹
LP(X,Y,Z) = Local position
RI(X,Y,Z) = World position at which the ray intersected the object
LM = Local matrix (LM⁻¹ is its inverse)
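As a minimal illustration of this conversion (a Python/NumPy sketch following the row-vector convention of the formula above; the prototype itself performs this step in JavaScript via the O3D API, so the function and matrix layout here are assumptions for illustration only):

```python
import numpy as np

def world_to_local(ray_intersection, local_matrix):
    """Convert a world-space hit point into the model's local space.

    ray_intersection: (x, y, z) where the picking ray struck the object.
    local_matrix: 4x4 transform of the model node (row-vector convention,
                  i.e. world = local @ local_matrix).
    """
    # Promote to a homogeneous row vector [x, y, z, 1].
    world = np.append(np.asarray(ray_intersection, dtype=float), 1.0)
    # LP = RI x LM^-1 (the formula above).
    local = world @ np.linalg.inv(local_matrix)
    return local[:3]

# Example: a model translated by (2, 0, 0); a hit at world (3, 1, 0)
# corresponds to local point (1, 1, 0), which is what gets stored with
# the annotation so it stays attached when the model is moved or rotated.
LM = np.eye(4)
LM[3, :3] = [2.0, 0.0, 0.0]                  # translation row (row-vector convention)
print(world_to_local((3.0, 1.0, 0.0), LM))   # -> [1. 1. 0.]
```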
4.2 Attaching annotations to a single point on a 2.5D VR object.
This process is simple since a 2.5D VR object is a series of images that simulates the rotation of a 3D object. Determining the annotated location involves simply recording the (X, Y) values. One important consideration is that the computer cannot differentiate whether the picked point lies on the object or on the background, since 2D imagery does not contain any volume data; without extracting the object from the image, the selected position may lie outside the object itself. It is therefore impossible for the computer to determine what has been selected using pure vector mathematics. One way to make this possible is to use colour detection methods [33] to identify colour differences between the background and the object (assuming they exist). The background needs to have a single colour (no gradient or pattern) that is distinct from the object's colour and texture, so that the computer can easily recognise whether the selected point lies on the object or on the background. However, due to the lack of volume information, it is extremely difficult to migrate an annotation from a 2D image to a 3D object.
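A minimal sketch of such a colour test (Python; the per-channel distance measure and threshold are illustrative assumptions rather than the specific method of [33]):

```python
def is_on_object(pixel_rgb, background_rgb, threshold=30):
    """Return True if the clicked pixel differs enough from the
    (uniform, non-gradient) background colour to count as 'on the object'.

    pixel_rgb, background_rgb: (r, g, b) tuples in the range 0-255.
    threshold: illustrative colour-distance cut-off.
    """
    # Sum of absolute per-channel differences: a simple colour distance.
    distance = sum(abs(p - b) for p, b in zip(pixel_rgb, background_rgb))
    return distance > threshold

# Example: a click on a dark ochre sculpture against a white backdrop is
# accepted; a click on the backdrop itself is rejected.
print(is_on_object((120, 80, 60), (255, 255, 255)))    # True  -> attach annotation
print(is_on_object((252, 253, 250), (255, 255, 255)))  # False -> ignore the click
```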
4.3 Annotation migration between high quality 3D model and low quality 3D model.
This task is not particularly difficult. Since the annotation tags are stored separately but displayed together with the 3D models, the migration process is direct and the results are accurate. For this process to work accurately, the low quality 3D model has to be derived directly from the high quality model, by merging polygons, and the local matrix (related to rotation) and the 3D space must be precisely the same for the two models.
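Because the anchor is stored as a local-space position rather than as a reference to particular vertices, the same stored record can be replayed against either resolution. A minimal sketch of this idea (Python; the record layout and the add_marker call are hypothetical, not the prototype's actual Danno/O3D interfaces):

```python
# Hypothetical annotation record: the anchor is a local-space position,
# so it is independent of how many polygons the mesh contains.
annotation = {
    "body": "example descriptive tag",
    "local_position": (0.12, 0.85, -0.03),   # stored once
}

def place_annotation(annotation, model):
    """Attach the annotation to any resolution of the same artefact.

    Valid only because the low-polygon model was derived from the
    high-polygon model and both share the same local matrix / 3D space.
    """
    model.add_marker(annotation["local_position"], annotation["body"])

# place_annotation(annotation, high_poly_model)   # identical call for
# place_annotation(annotation, low_poly_model)    # both resolutions
```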
4.4 Annotation migration between a 3D model and a 2D image or a 2.5D VR object.
3D to 2D or 2.5D migration requires an understanding of 3D programming and the related mathematical theory; it is not as simple as merely discarding all the Z values. It uses the same algorithms as the projection and rendering of a 3D object onto a 2D monitor screen. To transform a 3D coordinate (X, Y, Z) into a 2D coordinate (X, Y), the formula below is implemented [24]:
P(X,Y,Z) = LP(X,Y,Z) × WM × VM × PM
P(X,Y,Z) = Point projected onto the 2D screen
LP(X,Y,Z) = Local position of the object
WM = World matrix
VM = View matrix
PM = Projection matrix
The above formula produces the projected location of a point on the 2D screen: the X and Y values map onto the surface of the computer display, and the Z value determines the apparent size of the object [24]. However, the projected X and Y values still require resizing to fit the 2D element. This can be done using the following formulas:
2DP(X) = 0.5 × (P(X) + 1) × 2DW
2DP(Y) = 0.5 × (1 - P(Y)) × 2DH
2DP(X) = Projected point's left position on the 2D element
2DP(Y) = Projected point's top position on the 2D element
P(X) = Projected point's left position without considering the size of the 2D element
P(Y) = Projected point's top position without considering the size of the 2D element
2DW = 2D element's width
2DH = 2D element's height
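The following Python/NumPy sketch combines the projection and viewport formulas above (row-vector convention, with the usual perspective divide added; the matrix values and the function itself are illustrative assumptions, not code taken from the prototype):

```python
import numpy as np

def project_to_screen(local_pos, world_m, view_m, proj_m, width, height):
    """Project a local-space annotation point onto a 2D element.

    Implements P = LP x WM x VM x PM followed by the viewport mapping
    2DP(X) = 0.5*(P(X)+1)*2DW and 2DP(Y) = 0.5*(1-P(Y))*2DH.
    """
    lp = np.append(np.asarray(local_pos, dtype=float), 1.0)  # homogeneous row vector
    p = lp @ world_m @ view_m @ proj_m
    p = p / p[3]                        # perspective divide into [-1, 1] clip space
    x2d = 0.5 * (p[0] + 1.0) * width    # left offset on the 2D element
    y2d = 0.5 * (1.0 - p[1]) * height   # top offset (screen Y grows downwards)
    return x2d, y2d

# Trivial example with identity world/view/projection matrices:
# a point at the origin lands in the centre of a 640x480 element.
identity = np.eye(4)
print(project_to_screen((0, 0, 0), identity, identity, identity, 640, 480))
# -> (320.0, 240.0)
```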
The 2.5D VR object or 2D image contains no volume data; therefore, all the points are projected onto a 2D plane. However, any point that is hidden on the back side of the object should not appear on the 2D plane. Back-face culling determines whether a point on a graphical object is visible or facing away from the camera [25]. This requires the surface normal vector at the 3D position. The angle (in radians) between the surface normal and the camera vector has to be computed before performing back-face culling: if this angle is smaller than or equal to 1.5 radians (approximately π/2), the point is facing towards the camera and the annotation tag is displayed; otherwise the annotation tag is hidden. Below are the formulas used in this technique:
N(X,Y,Z) = LN(X,Y,Z) × WM
R = cos⁻¹( (N · C) / (|N| × |C|) )
N(X,Y,Z) = Normal vector of the object in world space
LN(X,Y,Z) = Local normal vector of the object, not associated with world space
WM = World matrix
C(X,Y,Z) = Reverse-transformed camera location vector (often [0,0,1])
R = Radian (angle) used for culling
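A sketch of this visibility test (Python/NumPy; the 1.5 radian cut-off follows the description above, roughly π/2, and the function is an illustrative assumption rather than the prototype's implementation):

```python
import numpy as np

def annotation_visible(local_normal, world_matrix,
                       camera_dir=(0.0, 0.0, 1.0), cutoff=1.5):
    """Return True if the annotated point's surface faces the camera.

    Transforms the local normal into world space (N = LN x WM), measures
    the angle in radians between N and the camera vector C, and keeps the
    annotation visible when that angle is <= the cut-off (about pi/2).
    """
    ln = np.append(np.asarray(local_normal, dtype=float), 0.0)  # w=0: a direction
    n = (ln @ world_matrix)[:3]
    c = np.asarray(camera_dir, dtype=float)
    cos_angle = np.dot(n, c) / (np.linalg.norm(n) * np.linalg.norm(c))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return angle <= cutoff

wm = np.eye(4)
print(annotation_visible((0, 0, 1), wm))   # facing the camera -> True (show tag)
print(annotation_visible((0, 0, -1), wm))  # facing away       -> False (hide tag)
```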
The migration of annotations from a 2D or 2.5D representation to a 3D representation requires storage of both the world matrix (world rotation) and the (X, Y) values. The migration process retrieves these values and feeds them into the 3D viewer, which rotates the object using the world matrix and attaches the annotation at the (X, Y) position. In this way, annotations can be migrated to 3D and eventually back to 2.5D correctly using the previous method.
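As an illustration of what has to round-trip between the viewers (the field names and viewer methods below are hypothetical, not the Danno/OAC schema actually used by the prototype), a 2.5D annotation only needs to carry the screen coordinates, the world matrix in force when it was created, and the annotation body:

```python
# Hypothetical record saved when an annotation is made on the 2.5D VR object.
annotation_2_5d = {
    "x": 312,                        # pixel position on the 2D element
    "y": 148,
    "world_matrix": [[1, 0, 0, 0],   # object pose (rotation) at the time
                     [0, 1, 0, 0],   # the annotation was created
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]],
    "body": "example descriptive tag",
}

def migrate_to_3d(viewer, record):
    """Replay a 2.5D annotation in the 3D viewer (illustrative API).

    The viewer first rotates the model to the stored pose, then casts a
    pick ray through (x, y) exactly as for a native 3D point annotation.
    """
    viewer.set_world_matrix(record["world_matrix"])
    viewer.attach_annotation_at(record["x"], record["y"], record["body"])
```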
5. RESEARCH METHODOLOGY & PLAN
The proposed approach involves 11 stages (Figure 5):
Figure 5: A diagram describing the digitising process
- 3D Model acquisition – this phase involves obtaining 3D representations (VRML) of selected sculptures from the UQ Anthropology museum using a portable laser scanner (Konica Minolta VIVID 9i) located at the eResearch Lab at the University of Queensland. The scanner acquires a number of partial scans of the object via an automatically calibrated turn-table. These partial scans are then registered and merged into a single object.
- Post-processing and texture mapping – this phase involves merging the partial scans into the complete model, cleaning up the data points, and then texture mapping the surface images onto the triangulated mesh.
- Development of the annotation creation, browsing, retrieval and display interface using a combination of the 3D viewer (O3D scene graph API), and Web 2.0 technologies – and the refined OAC data model.
- Refinement of Danno (UQ developed annotation server) for the annotation creation and storage services.
- Development of the automatic annotation migration feature of the prototype – to migrate a single annotation automatically between different representations of the one digital object.
- Development and integration of the segmentation/3D feature specification tools using a user-guided approach similar to [12].
- Extension of the CIDOC CRM ontology to define concepts specific to the Wik community.
- Development of search, browse and visualization services/user interface over the stored annotations.
- Specification and evaluation of SWRL rules to infer high-level labels from combinations of low-level features [14, 15, 16]. For example: if shape is like this and decorative motif is like this and colours like these then it is "a Wik carving about Winchanam (bonefish totem)".
- Usability studies, evaluation, feedback and refinement – this stage involves working with both the museum curators and some of the Indigenous artists from Cape York to acquire their feedback on the system’s usability and application;
- Final review and publications.
To date, I have completed the first 5 steps in this methodology. The next phase involves investigating the annotation of 3D surface regions and 3D segments of digital objects--and mechanisms for migrating these between versions (3D representations of different resolution and 2.5D versions).
6. CONTRIBUTIONS AND NOVELTY
3D annotation services are not new, but in the past they have been highly application-dependent or format-dependent. In addition, past 3D annotation tools have been proprietary software applications that only enable point-based annotation. The approach described here is novel because it is web-based, open source, flexible and interoperable across 3D versions/formats and across annotation clients. It will also support point, region and segment annotations. The most comparable systems are probably the ShapeAnnotator [22] and Adobe Acrobat 3D [23].
Figure 6: Screen shot of the ShapeAnnotator
The ShapeAnnotator (Figure 6) mainly focuses on automatic segmentation to separate a 3D object into different segments [22]. It is not a web-based application: users are required to download and install a 13 MB application, the models have to be downloaded separately, and the annotations cannot easily be shared. The ShapeAnnotator does not enable users to annotate an arbitrary region of their choosing, only pre-identified segments. The application also does not display textures for 3D models, which makes it harder for the user to recognise the real object, and the interface is difficult for a first-time user. Nevertheless, the ShapeAnnotator is a proof of concept that supports auto-segmentation into annotatable parts, and it provides a useful service complementary to our objectives.
Figure 7: Acrobat 3D does not allow annotation of the 3D object directly within the PDF.
Adobe Acrobat 3D (Figure 7) allows annotations to be attached to different parts; however, rather than dynamically decomposing a single 3D model into separate segments, the parts must be separated beforehand as individual objects using 3D CAD software and saved into a single file. This is equivalent to displaying multiple 3D objects in one scene file and annotating one object at a time. The annotation is not attached to the 3D model but is placed outside of the 3D view. Annotations cannot be created directly on the object stored in the PDF without downloading the PDF file and using Adobe's proprietary SDK. The documentation is lengthy and attaching annotations appears to require some programming skill, which can be difficult for certain user groups [23].
Our project provides an online, web-based 3D annotation service that does not impose a steep learning curve for creating annotations. Annotations can be created dynamically via a Web plug-in without manually downloading and uploading the 3D model. The O3D plugin is only a 550 KB download and can be downloaded and installed automatically from Google. Annotations can be attached to 3D objects in different formats (high-res, med-res, low-res) and migrated between them. This enables even users with slow computational or graphical processors, or limited bandwidth, to use our service.
Future aims include enabling users to attach annotations to surface regions and 3D segments. In addition, I will enable tags to be drawn from folksonomies and/or the CIDOC CRM ontology (ontology-directed folksonomies), and will support faceted search. This project will differ from the previously mentioned projects such as SCULPTEUR, the Princeton 3D search engine and Columbia Shape Search, in which indexing is based entirely on machine learning and application semantics and which fail to take advantage of folksonomic tags. It will also differ from applications that use only folksonomic tags and keyword search, which often produce inconsistent and inaccurate search results. Our project will attempt to combine both user-generated tags and automatic feature extraction to produce a hybrid that enhances the discovery of 3D cultural artefacts. The outcome will include a test-bed and an online digital repository/gallery of 3D cultural heritage artefacts enriched with both manually-generated and automatically-generated metadata, enabling fast, accurate search and retrieval of 3D objects by both museum experts and the general public.
References
[1] J. Hunter, R. Schroeter, B. Koopman, and M. Henderson, "Using the semantic grid to build bridges between museums and indigenous communities," in Proceedings of the GGF11 Semantic Grid Applications Workshop, June 10, 2004, pp. 46–61.
[2] "3D digital preservation of cultural heritages," Ikeuchi Lab, University of Tokyo. [Online].
[3] J. Rowe and A. Razdan, "A prototype digital library for 3D collections: Tools to capture, model, analyze, and query complex 3D data," in Museums and the Web 2003, D. Bearman and J. Trant, Eds. Toronto: Archives & Museum Informatics, 2003, pp. 147–158.
[4] V. Isler, B. Wilson, and R. Bajcsy, "Building a 3D Virtual Museum of Native American Baskets," in Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06), IEEE Computer Society, Washington, DC, 2006, pp. 954–961.
[5] S. Chun, R. Cherry, D. Hiwiller, J. Trant, and B. Wyman, "Steve.museum: An Ongoing Experiment in Social Tagging, Folksonomy, and Museums," Museums and the Web 2006, Albuquerque, March 2006.
[6] CIDOC Conceptual Reference Model, ICOM. [Online]. Available: http://cidoc.ics.forth.gr/
[7] T. Jung, E. Y. Do, and M. D. Gross, "Immersive Redlining and Annotation of 3D Design Models on the Web," 8th International Conference on Computer Aided Architectural Design Futures, Kluwer, 1999.
[8] T. Jung, M. D. Gross, and E. Y. Do, "Annotating and sketching on 3D web models," in Proceedings of the 7th International Conference on Intelligent User Interfaces (IUI '02), San Francisco, California, USA, 2002, ACM, New York, NY.
[9] Addis et al., "New Ways to Search, Navigate and Use Multimedia Museum Collections over the Web," in J. Trant and D. Bearman (Eds.), Museums and the Web 2005: Proceedings, Toronto: Archives & Museum Informatics, 2005.
[10] "Princeton 3D model search engine," Princeton Shape Retrieval and Analysis Group. [Online]. Available: http://shape.cs.princeton.edu/search.html
[11] C. Goldfeder and P. Allen, "Autotagging to Improve Text Search for 3D Models," IEEE Joint Conference on Digital Libraries (JCDL), Pittsburgh, July 2008.
[12] Z. Ji, L. Liu, Z. Chen, and G. Wang, "Easy mesh cutting," Computer Graphics Forum, vol. 25, no. 3, pp. 283–292, Sept. 2006.
[13] R. Schroeter, J. Hunter, J. Guerin, I. Khan, and M. Henderson, "A Synchronous Multimedia Annotation System for Secure Collaboratories," e-Science 2006, Amsterdam, Netherlands, Dec 4-6, 2006.
[14] L. Hollink, S. Little, and J. Hunter, "Evaluating the Application of Semantic Inferencing Rules to Image Annotation," Third International Conference on Knowledge Capture (K-CAP '05), Banff, Canada, Oct 2-5, 2005.
[15] J. Hunter and S. Little, "A Framework to Enable the Semantic Inferencing and Querying of Multimedia Content," International Journal of Web Engineering and Technology (IJWET), Special Issue on the Semantic Web, vol. 2, nos. 2/3, 2005.
[16] S. Little and J. Hunter, "Rules-By-Example – A Novel Approach to Semantic Indexing and Querying of Images," International Semantic Web Conference (ISWC 2004), Hiroshima, Nov 2004.
[17] M. Papakipos, "Introducing O3D," April 20, 2009. [Online]. Available: http://o3d.blogspot.com/2009/04/toward-open-web-standard-for-3d.html
[18] Google O3D official website FAQ. [Online]. Available: http://code.google.com/p/o3d/wiki/FAQs
[19] Metadata.Net Danno / Dannotate Overview. [Online]. Available: http://metadata.net/sites/danno/index.html
[20] jQuery official website. [Online]. Available: http://jquery.com/
[21] Vector Math for 3D Computer Graphics, Fourth Revision, July 2009. [Online]. Available: http://chortle.ccsu.edu/VectorLessons/vectorIndex.html
[22] M. Attene, F. Robbiano, M. Spagnuolo, and B. Falcidieno, "Semantic Annotation of 3D Surface Meshes based on Feature Characterization," CNR, Genova, Italy.
[23] Adobe Acrobat 3D Annotation Tutorial, July 27, 2005. [Online]. Available: http://www.adobe.com/devnet/acrobat/pdfs/3DAnnotations.pdf
[24] R. Koci, "Computer Graphics Unveiled – World, View and Projection Matrix Unveiled." [Online]. Available: http://robertokoci.com/world-view-and-projection-matrix-unveiled/
[25] P. Laurila, "Geometry Culling in 3D Engines," Sep 10, 2000. [Online]. Available: http://www.gamedev.net/reference/articles/article1212.asp
[26] J. Hunter, M. Henderson, and I. Khan, "Collaborative Annotation of 3D Crystallographic Models," J. Chem. Inf. Model., vol. 47, no. 6, 2007. http://pubs.acs.org/cgi-bin/article.cgi/jcisd8/2007/47/i06/pdf/ci700173y.pdf
[27] R. Kadobayashi et al., "3D Model Annotation from Multiple Viewpoints for Croquet," Proceedings of the Fourth International Conference on Creating, Connecting and Collaborating through Computing (C5 '06), 2006.
[28] A. Shamir, "A survey on mesh segmentation techniques," Computer Graphics Forum, vol. 28, no. 6, pp. 1539–1556, 2008.
[29] L. De Floriani, L. Papaleo, and N. Carissimi, "A Java3D framework for inspecting and segmenting 3D models," in Proceedings of the 13th International Symposium on 3D Web Technology (Web3D '08), Los Angeles, California, August 9-10, 2008, ACM, New York, NY, pp. 67–74. DOI: http://doi.acm.org/10.1145/1394209.1394225
[30] M. Attene, F. Robbiano, M. Spagnuolo, and B. Falcidieno, "Part-based Annotation of Virtual 3D Shapes," in Proceedings of Cyberworlds '07, special session on the NASAGEM workshop, IEEE Computer Society Press, 2007, pp. 427–436.
[31] T. Funkhouser et al., "Modeling by example," ACM Transactions on Graphics, vol. 23, no. 3, pp. 652–663, 2004.
[32] The Open Annotation Collaboration Alpha Data Model. [Online]. Available: http://www.openannotation.org/documents/OAC-Model_UseCases-alpha.pdf
[33] H. Fu, Z. Chi, and D. Feng, "Object Popping-out and Characterization Based on the Human Visual Mechanism," International Journal of Information Technology, vol. 12, no. 5, pp. 56–64, 2006. http://www.icis.ntu.edu.sg/scs-ijit/1205/1205_7.pdf