
Right Hemisphere awarded Patent for Graphics File Management

by Mick

With the introduction of the new Right Hemisphere 5 visual communication and collaboration software, Right Hemisphere also announced it was awarded a key patent from the U.S. Patent and Trademark Office for technology that enables next-generation enterprise communications. The issued U.S. Patent No. 7,092,974 is for a “digital asset server and asset management system.” The invention outlines an advanced graphic file management system that allows users to control the complexity, flow, and quality of enterprise-wide graphic file management and usage.

Click HERE to view the actual patent.

Right Hemisphere’s graphic file management system allows for automated and dynamic repurposing of large amounts of digital graphic data or files, including their maintenance, use and manipulation. The system comprises a server, which can manipulate graphic files and establish links to each graphic file, and a database on which the server stores those links. The server can create other formats of a particular file and allows amendments to graphic files to be tracked.
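As a rough illustration of the patented idea (a server holding links to graphic files, a database of those links, format derivation, and amendment tracking), here is a toy sketch; every name and structure below is invented for illustration and is not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    # One managed graphic file; fields are illustrative, not from the patent.
    asset_id: str
    fmt: str
    data: bytes
    revisions: list = field(default_factory=list)  # history of (fmt, data)

class AssetServer:
    """Toy model: a server holding links to graphic files, able to derive
    other formats and track amendments."""
    def __init__(self):
        self._db = {}  # the 'database' of links: asset_id -> Asset

    def add(self, asset_id, fmt, data):
        self._db[asset_id] = Asset(asset_id, fmt, data)

    def link(self, asset_id):
        # Clients receive a stable link rather than the file itself.
        return f"asset://{asset_id}"

    def derive(self, asset_id, target_fmt):
        # Stand-in for real format conversion (e.g. TIFF -> JPEG).
        src = self._db[asset_id]
        return Asset(src.asset_id, target_fmt, src.data)

    def amend(self, asset_id, new_data):
        a = self._db[asset_id]
        a.revisions.append((a.fmt, a.data))  # keep old version for tracking
        a.data = new_data

server = AssetServer()
server.add("logo", "tiff", b"...tiff bytes...")
server.amend("logo", b"...edited bytes...")
```

Clients always go through the link, so the server can repurpose or re-derive the underlying file without the link ever changing.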

Click HERE to visit the Right Hemisphere website.

iBar

What is iBar?

iBar is a system for the interactive design of any bar-counter. Integrated video-projectors can project any content on the milky bar-surface. The intelligent tracking system of iBar detects all objects touching the surface. This input is used to let the projected content interact dynamically with the movements on the counter. Objects can be illuminated at their position or virtual objects can be 'touched' with the fingers.
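The tracking-and-projection loop described above can be sketched roughly. This is a hypothetical minimal version (a real system like iBar uses proper computer-vision blob tracking rather than a single-cluster centroid):

```python
import numpy as np

def detect_touches(frame, threshold=0.5):
    """Toy surface tracking: treat bright pixels in a grayscale (0..1)
    camera frame as objects touching the bar surface and return their
    centroid. A real tracker would segment individual blobs."""
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return []
    return [(float(xs.mean()), float(ys.mean()))]

def halo(frame_shape, touch, radius=3):
    """Render a projected 'halo' mask centred on a detected touch, i.e.
    illuminate the object at its position on the counter."""
    h, w = frame_shape
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = touch
    return ((xx - cx) ** 2 + (yy - cy) ** 2) <= radius ** 2

frame = np.zeros((20, 20))
frame[10, 5] = 1.0              # a glass set down at x=5, y=10
touches = detect_touches(frame)
mask = halo(frame.shape, touches[0])
```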

Find out more at https://www.i-bar.ch/en/info/

reactable

A combination of touch-screen and sound synthesis technology produces the reactable.

Visit the reactable website at https://www.iua.upf.es/mtg/reactable/

The reactable is a multi-user electro-acoustic music instrument with a tabletop tangible user interface. Several simultaneous performers share complete control over the instrument by moving physical artefacts on the table surface and constructing different audio topologies in a kind of tangible modular synthesizer or graspable flow-controlled programming language.

It also has a Wikipedia entry at https://en.wikipedia.org/wiki/ReacTable

3D Printscreen

by Mick

3D PrintScreen from Dassault is a version of the familiar 2D print screen functionality but applied to 3D models.

It could be described as a TSR (terminate and stay resident) application: it remains active, running in the background, until invoked with the F10 'hot' key (the default, which can be changed if required) to capture 3D model geometry from another running application.

Models are saved as 3D XML from any application based on OpenGL.
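The capture-to-file step might look roughly like this. Note that the XML fragment below is a hypothetical simplification for illustration only; the actual Dassault 3D XML schema is much richer.

```python
import xml.etree.ElementTree as ET

def save_capture(vertices, triangles):
    """Serialise captured geometry to a *simplified, hypothetical* XML
    fragment, illustrating only the 'grab geometry -> write XML' step of
    a tool like 3DVIA Printscreen."""
    root = ET.Element("Model")
    mesh = ET.SubElement(root, "Mesh")
    ET.SubElement(mesh, "Positions").text = " ".join(
        f"{x} {y} {z}" for x, y, z in vertices)
    ET.SubElement(mesh, "Faces").text = " ".join(
        f"{a} {b} {c}" for a, b, c in triangles)
    return ET.tostring(root, encoding="unicode")

# One triangle captured from a (hypothetical) OpenGL vertex buffer:
xml_text = save_capture([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```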

Tested it out with an application used for generating meshed models. Here is the application running:

modelsrc

source application

and here is the resultant 3D XML file displayed in the 3D XML viewer

modelcapture

captured model

3D Printscreen is available for FREE from

https://www.3ds.com/products/3dvia/3d-xml/3dvia-printscreen/

where you can also download the XML Player to view your captured models.

3D Display Technology from Philips

Philips 3D Solutions has introduced a 20-inch 3D 4YOU frame-mount display. Based on Philips WOWvx technology, it provides the appearance of 3D viewing without the need for special glasses, using slanted lenticular lens screen technology, and connects to a PC through a standard DVI interface.

https://www.business-sites.philips.com/3dsolutions/About/Index.html

Philips 3D Display

A sheet of transparent lenses is fixed on an LCD screen. This sheet sends a different image to each eye, so a person sees two images, which the brain combines to create a 3D effect. Because the sheet is transparent, it allows full brightness, full contrast and true color representation.

2D_plus_depth

In order to generate a 3D image, the display requires a regular 2D representation of the image and a depth-map. This depth-map indicates the distance between each pixel and the viewer. The 2D image and the depth-map are used to create images on the screen, and these images are then merged by the viewer’s brain into a 3D sensation.
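The 2D-plus-depth idea can be sketched as a depth-dependent horizontal shift of pixels into separate views. This is a naive illustration only; the actual WOWvx processing is proprietary and far more sophisticated (occlusion filling, many views per lenticule, and so on).

```python
import numpy as np

def depth_to_views(image, depth, max_disparity=4):
    """Shift each pixel horizontally by a disparity proportional to its
    depth-map value (0 = far, 1 = near), producing a crude left/right
    view pair from a 2D image plus depth map."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(depth[y, x] * max_disparity))
            if 0 <= x - d < w:
                left[y, x - d] = image[y, x]   # near pixels shift left...
            if 0 <= x + d < w:
                right[y, x + d] = image[y, x]  # ...and right in the other view
    return left, right

img = np.arange(64, dtype=float).reshape(8, 8)
dep = np.zeros((8, 8))
dep[4, 4] = 1.0                # a single 'near' pixel
L, R = depth_to_views(img, dep)
```

Far pixels (depth 0) land in the same place in both views, while near pixels are displaced in opposite directions, which is what the brain reads as depth.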

Read more about the technology in the attached document

https://www.kf12.com/blogs/techno/wp-content/uploads/philips-3d-display-technology.pdf

Plug-ins are available for the popular 3D modelling applications that enable users to export 3D animations in the 2D-plus-depth format. Content creation tools are also available for visualizing stereoscopic video content.

The 20-inch display is available for purchase from October 2006 onwards. Also available (now) is a 42-inch version of the display.

EON Reality Inc

Quite impressed with the EON Touchlight, a bare-hand 3D interaction virtual reality display system based on an invention from Microsoft Research.

Touchlight

See the movie at

https://www.eonreality.com/video/touchlight

Image processing techniques are used to combine the output of 2 video cameras behind a semi-transparent plane in front of the user. The resulting image shows 3D objects which appear to float in space. Users interact with the displayed objects either by touching the screen or moving their hands just off the screen surface.
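A minimal sketch of the two-camera principle, assuming both views have already been warped onto the screen plane (the homographies are omitted here): only objects at the plane line up in both images, so combining them pixelwise keeps hands touching the surface and suppresses the background.

```python
import numpy as np

def on_surface(cam_a, cam_b, threshold=0.5):
    """Toy version of the TouchLight idea: after registering both camera
    views to the screen plane, a pixelwise minimum keeps only features
    that appear at the same place in both views, i.e. at the surface."""
    return np.minimum(cam_a, cam_b) > threshold

a = np.zeros((4, 4))
b = np.zeros((4, 4))
a[1, 1] = b[1, 1] = 1.0   # hand touching the screen: aligned in both views
a[2, 3] = 1.0             # background object: seen by one camera only
mask = on_surface(a, b)
```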

More details about this device are available at

https://www.eonreality.com/news/news_archive/press_releases07_18_06.htm

Read details about other products from EON at their website

https://www.eonreality.com/

including:

  • EON Sales Assistant - an authoring tool for creating sales material from 3D models.
  • EON Display Systems - various display systems for immersive and stereoscopic viewing.
  • EON Professional - an authoring tool that brings Product Lifecycle Management (PLM) product data to life with real-time photo-realistic features, an advanced physics engine and realistic human behaviors.

Sidenote: EON Reality Inc works closely with the Digital Knowledge Exchange (DKE), based in Doncaster (UK). Their objective is to introduce emerging technologies to industry in Yorkshire and the wider UK, and they have founded the Interactive Visualisation and Research Centre (IVRC), a collaborative venture funded by Doncaster College, EON Reality Inc and the European Union (Objective 1).


Mitsubishi Research Labs (MERL)

Mitsubishi Electric Research Laboratories is the North American arm of the Mitsubishi Electric Corporation's Corporate R&D Group. A long-time technical contributor to the computer graphics community, MERL conducts application-motivated research and development in computer and communication technologies. A recent development is MERL's DiamondTouch table, the first multi-user touch technology.

https://www.merl.com

Diamond Touch

The MERL DiamondTouch table is a multi-user, debris-tolerant, touch-and-gesture-activated screen for supporting small group collaboration, including gaming. It is the first touch screen that allows multiple users to interact simultaneously, and it knows who is who, making it perfect for multi-user touch-interactive arcade games.

https://www.merl.com/projects/DiamondTouch

The DiamondTouch developer's kit includes a relevant SDK.

Image Manipulation

by Mick

I attended the presentation of a series of papers related to Image Manipulation. Aimed primarily at tools for the digital darkroom, the papers were nonetheless interesting.

Color Harmonisation

sveta_pink_before

Input Image

sveta_pink_after

harmonized background (buildings)

Colour harmonisation ensures that the colours used in an image are harmonized, using a series of hue harmonic templates on the HSV colour wheel. If all the colours in the image lie within a segment of the hue wheel, the image is colour-harmonized. (Harmonic colours are sets of colours that are aesthetically pleasing in terms of human visual perception.)
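The sector idea can be sketched numerically. Here is a minimal hue-clamping version; the paper's template shapes and its optimisation over template orientation are considerably richer than this.

```python
import numpy as np

def harmonize_hues(hues, centre, width):
    """Pull every hue (degrees, 0-360) into a single sector of the colour
    wheel centred at `centre` with angular `width`, by clamping to the
    nearest sector edge. A crude stand-in for hue harmonic templates."""
    hues = np.asarray(hues, dtype=float)
    # Signed angular distance from the sector centre, in (-180, 180].
    delta = (hues - centre + 180.0) % 360.0 - 180.0
    clamped = np.clip(delta, -width / 2, width / 2)
    return (centre + clamped) % 360.0

# Push scattered hues into a 60-degree sector around 0 (the reds):
out = harmonize_hues([10, 350, 120, 200], centre=0, width=60)
```

Hues already inside the sector (10 and 350 here) are untouched; hues outside it (120 and 200) snap to the nearest sector edge, which is what makes the palette read as harmonic.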

Details of the paper are available at

https://www.cs.tau.ac.il/~sorkine/ProjectPages/Harmonization/

Drag and drop pasting

Click the image to see the dragged region and then the pasted result
(unfortunately 🙁 this will only work in IE, not Firefox).

Drag and drop pasting - composites one image seamlessly within another. It was not possible to tell that the pasted image was not part of the original. Some more detail is available at

https://www.cse.cuhk.edu.hk//drag-and-drop_pasting.html
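The "seamless" part comes from gradient-domain (Poisson) compositing. Below is a minimal sketch, assuming a rectangular patch and a plain Jacobi solver; the paper additionally optimises the shape of the paste boundary itself, which is what this simplification omits.

```python
import numpy as np

def seamless_paste(dst, src, top, left, iters=500):
    """Gradient-domain compositing sketch: keep the *gradients* of the
    pasted patch but solve for values that match the destination image
    along the patch border, so no seam is visible."""
    h, w = src.shape
    out = dst.copy()
    patch = src.astype(float).copy()
    # Fix the patch border to the destination values (Dirichlet boundary).
    patch[0, :] = dst[top, left:left + w]
    patch[-1, :] = dst[top + h - 1, left:left + w]
    patch[:, 0] = dst[top:top + h, left]
    patch[:, -1] = dst[top:top + h, left + w - 1]
    # Laplacian of the source patch = the gradients we want to preserve.
    lap = np.zeros_like(patch)
    lap[1:-1, 1:-1] = (4 * src[1:-1, 1:-1] - src[:-2, 1:-1] - src[2:, 1:-1]
                       - src[1:-1, :-2] - src[1:-1, 2:])
    for _ in range(iters):  # iterate toward the Poisson solution
        patch[1:-1, 1:-1] = (patch[:-2, 1:-1] + patch[2:, 1:-1]
                             + patch[1:-1, :-2] + patch[1:-1, 2:]
                             + lap[1:-1, 1:-1]) / 4.0
    out[top:top + h, left:left + w] = patch
    return out

dst = np.full((12, 12), 10.0)   # flat destination image
src = np.full((6, 6), 50.0)     # flat source patch, different brightness
result = seamless_paste(dst, src, 3, 3)
```

With a flat source the patch gradients are zero, so the solver relights the patch to blend invisibly into the destination; with a textured source the texture survives while its overall brightness adapts.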

Two-scale Tone Management for Photographic Look

Two-scale Tone Management for Photographic Look - employed a technique to merge the appearance (tone) of one image with another.

Abstract

We introduce a new approach to tone management for photographs. Whereas traditional tone-mapping operators target a neutral and faithful rendition of the input image, we explore pictorial looks by controlling visual qualities such as the tonal balance and the amount of detail. Our method is based on a two-scale non-linear decomposition of an image. We modify the different layers based on their histograms and introduce a technique that controls the spatial variation of detail. We introduce a Poisson correction that prevents potential gradient reversal and preserves detail. In addition to directly controlling the parameters, the user can transfer the look of a model photograph to the picture being edited.

Read more at

https://people.csail.mit.edu/soonmin/photolook/
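The two-scale decomposition at the heart of the paper can be sketched as follows, substituting a simple box blur for the paper's edge-preserving (bilateral-style) filter and plain gains for its histogram transfer and Poisson correction.

```python
import numpy as np

def box_blur(img, k=5):
    """Box blur as a crude stand-in for an edge-preserving filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_tone(img, detail_gain=2.0, base_contrast=0.5):
    """Split an image into a large-scale 'base' and a fine 'detail' layer,
    then restyle each independently: compress the tonal balance of the
    base and exaggerate the texture in the detail layer."""
    base = box_blur(img)
    detail = img - base
    mid = base.mean()
    new_base = mid + (base - mid) * base_contrast
    return new_base + detail * detail_gain

img = np.random.default_rng(0).random((16, 16))
out = two_scale_tone(img)
```

Setting both gains to 1 reconstructs the input exactly, which is the sanity check that the decomposition loses nothing.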

Interactive adjustment of tonal values


Interactive adjustment of tonal values - demonstrated interactively adjusting the tone of selected parts of an image. Here's a paper on the subject:

https://www.cs.huji.ac.il/~danix/itm/

Image-based material editing

Image-based material editing - was a presentation of the techniques described in an earlier article in this blog, where the material of objects in images can be replaced by other materials.

https://www.kf12.com/blogs/techno/?p=310

Summary

All the above techniques are interesting in that they run in 'real time' as post-processing operations. Although they are applied primarily to photographic images, there is no reason, as far as I can determine, why they could not equally be applied to rendered images as post-processing operations.