Unified Patents D3d Technologies US10795457
U.S. Patent 10,795,457 (“D3D Technologies” or the “patent-at-issue”) was filed on January
24, 2018 and claims a priority date of December 28, 2006. Claim 1 of the patent-at-issue is
generally directed to a method for selecting a volume from a three-dimensional (3D) image using
a 3D cursor that has a non-zero volume. The 3D cursor encloses the selected volume and enables
presentation of modified versions of that volume. With the 3D cursor, orthogonal cross-sectional
image slices of the volume can be displayed, where the slices are marked with: 1) a
corresponding visible boundary of the 3D cursor, 2) reference lines from the 3D cursor to the
orthogonal cross-sectional imaging slices, and 3) the 3D cursor and results from a statistical
analysis performed on the contents of the 3D cursor.
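For illustration only, the selection-and-statistics operation that the claim language describes can be sketched in a few lines: a box-shaped cursor encloses a sub-volume of a 3D image array, and simple statistics are computed on the enclosed voxels. The function names, the box-shaped cursor, and the choice of mean/min/max statistics are assumptions made for this sketch, not details taken from the patent.

```python
import numpy as np

def select_cursor_volume(image, corner, size):
    """Return the sub-volume enclosed by a box-shaped 3D cursor.

    `corner` is the (z, y, x) index of the cursor's near corner and
    `size` its (dz, dy, dx) extent; both are illustrative parameters.
    """
    z, y, x = corner
    dz, dy, dx = size
    return image[z:z + dz, y:y + dy, x:x + dx]

def cursor_statistics(subvolume):
    """A simple statistical analysis of the cursor's contents."""
    return {
        "mean": float(subvolume.mean()),
        "min": float(subvolume.min()),
        "max": float(subvolume.max()),
    }

# Toy 4x4x4 "image" with voxel value 16*z + 4*y + x.
image = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
vol = select_cursor_volume(image, corner=(1, 1, 1), size=(2, 2, 2))
stats = cursor_statistics(vol)
```

The sketch only captures the selection and statistics steps; it does not attempt the display, marking, or slice-rendering limitations.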
The primary reference, U.S. Patent 6,297,799 (“TeraRecon”), has a filing and priority date
of November 12, 1998. The patent is directed to a low-cost, real-time volume rendering system
that includes a three-dimensional cursor for interactively identifying and marking positions
within a three-dimensional volume data set during display. The cursor’s appearance is
configurable according to the preferences of the user, such that it may appear as cross hairs,
cross planes, or some combination thereof, with each cursor dimension being selectively enabled.
The primary reference, U.S. Patent Application 2007/0046661 (“Siemens I”), has a filing
and priority date of August 31, 2005. The application is directed to free-hand manual navigation
that assists diagnosis using perspective rendering for three- or four-dimensional imaging. In
addition to a three-dimensional rendering for navigation, several two-dimensional images
corresponding to a virtual camera location are displayed to assist the navigation and
visualization. A representation of the virtual camera is displayed in one or more of the
two-dimensional images and can be moved within those images to navigate for perspective rendering.
The primary reference, U.S. Patent 6,429,884 (“Siemens II”), was filed on November 22,
1999 and has a priority date of November 24, 1998. The patent is directed to a method for
presenting and processing digital image data of an examination volume of a subject using a
pickup system, such as a medical examination installation. The examination volume, or a part
thereof, is played back on a display monitor as a three-dimensional image, and a volume region
of the image is defined with a marker; this volume region, or the image environment thereof, is
then removed from the image and no longer displayed in a subsequent image presentation.
Patent Owner is now on notice that claims of this patent are invalid; as a result, any new or
continued assertion of this patent may be considered meritless or brought in bad faith. Octane
Fitness, LLC v. ICON Health & Fitness, Inc., 572 U.S. 545, 554 (2014). Such considerations are
relevant to whether a case is deemed “exceptional” for purposes of awarding attorneys’ fees. 35
U.S.C. § 285; see, e.g., WPEM, LLC v. SOTI Inc., 2020 WL 555545, at *7 (E.D. Tex. Feb. 4,
2020), aff’d, 837 F. App’x 773 (Fed. Cir. 2020) (awarding fees for an exceptional case where
plaintiff “failed to conduct an invalidity and enforceability pre-filing investigation”); Energy
Heating, LLC v. Heat On-The-Fly, LLC, 15 F.4th 1378, 1383 (Fed. Cir. 2021) (affirming award of
fees where, inter alia, the plaintiff knew “that its patent was invalid”).
US10795457 (“D3D Technologies”)
A. US6297799 (“TeraRecon”)
B. US20070046661 (“Siemens I”)
C. US6429884 (“Siemens II”)
1. A method comprising:

providing a three-dimensional cursor that has a non-zero volume;

A. US6297799
“A schematic representation of an embodiment of the 3-D cursor modulation unit 200 is
illustrated in FIG. 8. FIG. 9 illustrates two dimensions of a three-dimensional cursor which may
be realized by the presently disclosed cursor modulation unit, the third dimension being
perpendicular to the page.” TeraRecon at 10:25-30.

B. US20070046661
“FIGS. 2 and 3 show perspective three-dimensional rendering of a three-dimensional image 34.
Ray lines 30 extend from a position 32 of a virtual camera. The ray lines 30 diverge from the
position 32 over a field of view. The user or a processor selects the field of view. The field of
view is 90 degrees pyramid or cone in one embodiment, but may have other extents or shapes. The
field of view covers or extends through a subset of the data representing the scanned volume.”
Siemens I at [0022].

C. US6429884
“FIGS. 3 through 7 show only the image presentations reproduced on the display monitor 7. These
are respective projection images wherein structures are three-dimensionally presented, i.e. one
can effectively look through the displayed volume and view body parts located in the volume,
with parts lying farther toward the back being covered by parts lying in front of them. In order
to eliminate such image portions that disturb the view onto body parts actually of interest, for
example a specific bone, a vessel or the like—see FIG. 3—, a third marker M3 in the form of a
closed line is mixed in on the image B when the user gives a corresponding command, a volume
region extending into the plane of presentation being defined with this marker M3. The marker M3
can be moved with the control unit 8 or can be drawn with a cursor. The image processor 6 then
determines the digital image dataset belonging to this defined volume region VB, i.e. this region
is defined in terms of image data.” Siemens II at 4:28-45.
selecting a volume of a three-dimensional image designated by the three-dimensional cursor,
wherein the three-dimensional cursor encloses the volume of the three-dimensional image;

B. US20070046661
“A user navigates a virtual camera through and inside a body cavity. The method uses planar or
three-dimensional rendering for navigation, reducing editing. The navigation is intuitive,
accelerating the workflow. Different views may be provided, such as perspective views of
different parts of a cavity.” Siemens I at [0018].

“FIGS. 2 and 3 show perspective three-dimensional rendering of a three-dimensional image 34. Ray
lines 30 extend from a position 32 of a virtual camera. The ray lines 30 diverge from the
position 32 over a field of view. The user or a processor selects the field of view. The field of
view is 90 degrees pyramid or cone in one embodiment, but may have other extents or shapes. The
field of view covers or extends through a subset of the data representing the scanned volume.”
Siemens I at [0022].

C. US6429884
“The marker M3 can be moved with the control unit 8 or can be drawn with a cursor. The image
processor 6 then determines the digital image dataset belonging to this defined volume region VB,
i.e. this region is defined in terms of image data.” Siemens II at 4:28-45.
presenting a modified version of the selected volume of the three-dimensional image; and

B. US20070046661
“In act 10, a perspective three-dimensional medical image is rendered from a virtual camera
within a scanned volume. Data representing the scanned volume is acquired and used in real-time
or acquired and stored for later use. The data represents different scan lines, planes or other
scan formats within the volume. The data is formatted based on the scan or reformatted into a
three-dimensional data set.” Siemens I at [0020].

“The data along the ray lines 30 determines the pixel values for the three-dimensional image.
Maximum, minimum, average, or other projection rendering techniques may be used. Shading, opacity
control or other now known or later developed volume rendering effects may be used.
Alternatively, surface rendering from a perspective view is provided, such as surface rendering
within the field of view defined by the extent of the ray lines 30. Rather than rendering one
three-dimensional image for the position 32, two three-dimensional medical images are rendered.
By using two sets of ray lines 30 offset from each other, the rendering is in stereo. Stereo
views may enhance depth perception, making the 3D navigation more intuitive. Whether stereo or
not, the perspective three-dimensional rendered image 34 appears as a picture seen through the
virtual camera viewing window. The closer the image or images structure to the camera, the larger
the structure appears.” Siemens I at [0023].

C. US6429884
“When the image B is located in the desired position, the marker M3—as shown in FIG. 6—is again
mixed in by the control unit 8, i.e. a volume region VB is again defined. As can be seen from
FIG. 6, the marker M3 can also include a region lying outside the displayed examination volume.
After acquisition and definition of this volume region, the appertaining image dataset is also
determined here by the image processor 6 and the corresponding image excerpt is clipped, see FIG.
6. Again, the corresponding, disturbing image excerpt was thus also cut out; the view onto volume
parts lying therebehind is no longer impeded. The finished image B can then be stored in the
memory 13.” Siemens II at 4:57-5:2.
displaying: orthogonal cross-sectional imaging slices, wherein the slices are marked with a
corresponding visible boundary of the three-dimensional cursor, reference lines from the
three-dimensional cursor to the orthogonal cross-sectional imaging slices, the three-dimensional
cursor and results from a statistical analysis performed on the contents of the three-dimensional
cursor.

B. US20070046661
“In act 12 of FIG. 1, one or more two-dimensional images 36 (see FIG. 3) are generated. The
two-dimensional images 36 correspond to respective two-dimensional planes in the scanned volume.
For example, three two-dimensional images 36 correspond to two or three orthogonal planes through
the volume. The data in the scanned volume intersecting the plane or adjacent to the plane is
selected and used to generate a two-dimensional image 36. Other multi-planar renderings may be
used with or without orthogonal or perpendicular planes.” Siemens I at [0027].

“The two-dimensional images 36 include a representation 38 of the position 32 of the virtual
camera. One, more, or all of the two-dimensional images 36 include the representation 38. Any
representation may be used. In FIG. 3, the representation is a dot, such as a dot colored to
provide contrast within the images 36. A camera icon, an intersection of lines, an arrow or other
representation may be used. The representation 38 indicates the position 32 to the user. The
representation includes or does not include additional graphics in other embodiments. For
example, dashed lines or a shaded field represents the field of view.” Siemens I at [0030].

C. US6429884
“FIGS. 3 through 7 show only the image presentations reproduced on the display monitor 7. These
are respective projection images wherein structures are three-dimensionally presented, i.e. one
can effectively look through the displayed volume and view body parts located in the volume,
with parts lying farther toward the back being covered by parts lying in front of them.” Siemens
II at 4:28-34.

“... being shown in the overall image G1, and a view onto the slice shown in G1 from the front is
shown in image G2. Markers M1, M2, each in the form of a line forming a rectangle, are
superimposed on the respective images G1, G2. A region of interest, namely the region located
inside the rectangle, can be clipped from the overall image presentation with these markers M1,
M2. The respective markers M1, M2 are variable with the control means 8, i.e. they can be
displaced and varied in shape and/or size.” Siemens II at 4:1-20.