Unified Patents D3d Technologies US10795457


PATROLL Winning Submission

U.S. Patent 10,795,457

U.S. Patent 10,795,457 (“D3D Technologies” or the “patent-at-issue”) was filed on January
24, 2018 and claims a priority date of December 28, 2006. Claim 1 of the patent-at-issue is
generally directed to a method for selecting a volume from a three-dimensional (3D) image using
a 3D cursor with non-zero volume. The 3D cursor encloses the selected volume and enables
presentation of modified versions of this volume. With this 3D cursor, orthogonal cross-sectional
image slices of the volume can be displayed, where the slices are marked with the following: 1) a
corresponding visible boundary of the 3D cursor, 2) reference lines from the 3D cursor to the
orthogonal cross-sectional imaging slices, and 3) the 3D cursor and results from a statistical
analysis performed on the contents of the 3D cursor.

The primary reference, U.S. Patent 6,297,799 (“TeraRecon”), has a filing and priority date
of November 12, 1998. The patent is directed to a low-cost, real-time volume rendering
system including a three-dimensional cursor for interactively identifying and marking positions
within a three-dimensional volume data set during display. In conjunction therewith, the cursor
appearance is configurable according to the preferences of the user, such that it may appear as
cross hairs, cross planes, or some combination thereof, with each cursor dimension being
selectively enabled.

The primary reference, U.S. Patent Application 2007/0046661 (“Siemens I”), has a filing
and priority date of August 31, 2005. The application is directed to free-hand manual navigation that
assists diagnosis using perspective rendering for three- or four-dimensional imaging. In addition
to a three-dimensional rendering for navigation, several two-dimensional images corresponding to
a virtual camera location are displayed to assist the navigation and visualization. A representation
of the virtual camera is displayed in one or more two-dimensional images, which can be moved
within the two-dimensional images to navigate for perspective rendering.

The primary reference, U.S. Patent 6,429,884 (“Siemens II”), was filed on November 22,
1999 and has a priority date of November 24, 1998. The patent is directed to a method for
presenting and processing digital image data of an examination volume of a subject using a
pickup system, such as a medical examination installation. The examination volume or a part
thereof is played back on the display monitor as a three-dimensional image, and a volume region
of the image is defined with a marker, this volume region or the image environment thereof being
removed from the image and no longer displayed in a subsequent image presentation.

Patent Owner is now on notice that claims of this patent are invalid; as a result, any new or
continued assertion of this patent may be considered meritless or brought in bad faith. Octane
Fitness, LLC v. ICON Health & Fitness, Inc., 572 U.S. 545, 554 (2014). Such considerations are
relevant to whether a case is deemed “exceptional” for purposes of awarding attorneys’ fees. 35
U.S.C. § 285; see, e.g., WPEM, LLC v. SOTI Inc., 2020 WL 555545, at *7 (E.D. Tex. Feb. 4,
2020), aff’d, 837 F. App’x 773 (Fed. Cir. 2020) (awarding fees for an exceptional case where
plaintiff “failed to conduct an invalidity and enforceability pre-filing investigation”); Energy
Heating, LLC v. Heat On-The-Fly, LLC, 15 F.4th 1378, 1383 (Fed. Cir. 2021) (affirming award of
fees where, inter alia, the plaintiff knew “that its patent was invalid”).

A sample claim chart comparing claim 1 of D3D Technologies to TeraRecon, Siemens I,
and Siemens II is provided below.
US10795457 (“D3D Technologies”) compared against:
A. US6297799 (“TeraRecon”)
B. US20070046661 (“Siemens I”)
C. US6429884 (“Siemens II”)
1. A method comprising: providing a three-dimensional cursor that has a non-zero volume;

A. US6297799
“A schematic representation of an embodiment of the 3-D cursor modulation unit 200 is illustrated in FIG. 8. FIG. 9 illustrates two dimensions of a three-dimensional cursor which may be realized by the presently disclosed cursor modulation unit, the third dimension being perpendicular to the page.” TeraRecon at 10:25-30.

“One input to the CURSOR_CONTROL register 202 is cursorPosition, a definition of the center point of the cursor, defined in the coordinate system 11 in which the object to be rendered is defined. The center point is defined as a cubic volume 300 of one unit of length per side, and is referred to herein as the center point volume.” TeraRecon at 10:36-41.

“For instance, in FIG. 9, each component of the two-dimensional cursor extends the same length on each side of the origin volume. . . . By simple extension, a three-dimensional cross hair cursor would provide a z-component of the same dimensions. Note that if both cursorLength and cursorWidth were set to zero, the cursor would still be non-zero, occupying the cursor origin volume. Therefore, by defining the cursor center point location and common length and width dimensions, a three-dimensional cursor can be described.” TeraRecon at 10:59-11:7.

B. US20070046661
“FIGS. 2 and 3 show perspective three-dimensional rendering of a three-dimensional image 34. Ray lines 30 extend from a position 32 of a virtual camera. The ray lines 30 diverge from the position 32 over a field of view. The user or a processor selects the field of view. The field of view is 90 degrees pyramid or cone in one embodiment, but may have other extents or shapes. The field of view covers or extends through a subset of the data representing the scanned volume.” Siemens I at [0022].

C. US6429884
“FIGS. 3 through 7 show only the image presentations reproduced on the display monitor 7. These are respective projection images wherein structures are three-dimensionally presented, i.e. one can effectively look through the displayed volume and view body parts located in the volume, with parts lying farther toward the back being covered by parts lying in front of them. In order to eliminate such image portions that disturb the view onto body parts actually of interest, for example a specific bone, a vessel or the like—see FIG. 3—, a third marker M3 in the form of a closed line is mixed in on the image B when the user gives a corresponding command, a volume region extending into the plane of presentation being defined with this marker M3. The marker M3 can be moved with the control unit 8 or can be drawn with a cursor. The image processor 6 then determines the digital image dataset belonging to this defined volume region VB, i.e. this region is defined in terms of image data.” Siemens II at 4:28-45.

selecting a volume of a three-dimensional image designated by the three-dimensional cursor, wherein the three-dimensional cursor encloses the volume of the three-dimensional image;

A. US6297799
“As previously noted, one of the modulation units 126 (FIG. 5) which is optionally employed between the illumination unit 122 and the compositing unit 124 is a three-dimensional (3-D) cursor generation unit. The 3-D cursor unit allows the insertion of a hardware generated, software controlled cursor into the volume data set being rendered, where the cursor rotates as the object itself is rotated.” TeraRecon at 9:53-59.

“1. Apparatus for generating a cursor within a rendering of a three-dimensional volume, comprising: multiple pipelined stages for enabling real-time generation of said rendering, said multiple pipelined stages comprising a modulation unit for defining the extent of said cursor within said volume and for determining whether a sample point within said volume is within said cursor extent. . . .” TeraRecon at claim 1.

B. US20070046661
“A user navigates a virtual camera through and inside a body cavity. The method uses planar or three-dimensional rendering for navigation, reducing editing. The navigation is intuitive, accelerating the workflow. Different views may be provided, such as perspective views of different parts of a cavity.” Siemens I at [0018].

“FIGS. 2 and 3 show perspective three-dimensional rendering of a three-dimensional image 34. Ray lines 30 extend from a position 32 of a virtual camera. The ray lines 30 diverge from the position 32 over a field of view. The user or a processor selects the field of view. The field of view is 90 degrees pyramid or cone in one embodiment, but may have other extents or shapes. The field of view covers or extends through a subset of the data representing the scanned volume.” Siemens I at [0022].

C. US6429884
“FIGS. 3 through 7 show only the image presentations reproduced on the display monitor 7. These are respective projection images wherein structures are three-dimensionally presented, i.e. one can effectively look through the displayed volume and view body parts located in the volume, with parts lying farther toward the back being covered by parts lying in front of them. In order to eliminate such image portions that disturb the view onto body parts actually of interest, for example a specific bone, a vessel or the like—see FIG. 3—, a third marker M3 in the form of a closed line is mixed in on the image B when the user gives a corresponding command, a volume region extending into the plane of presentation being defined with this marker M3. The marker M3 can be moved with the control unit 8 or can be drawn with a cursor. The image processor 6 then determines the digital image dataset belonging to this defined volume region VB, i.e. this region is defined in terms of image data.” Siemens II at 4:28-45.

presenting a modified version of the selected volume of the three-dimensional image; and

A. US6297799
“At a basic level, the 3-D cursor modulation unit compares the location of a sample being processed with a definition of the location of a 3-D cursor. If there is no overlap, the RGBA value assigned to the sample proceeds unmodified to other modulation units, if any, then to the compositing unit 124. If there is overlap between the sample and the cursor, then the RGBA value of the sample may be effected by the RGBA definition of the cursor at that location, as will be discussed in detail below. The modified sample is then passed on to the next modulation unit, if any, and then to the compositing unit.” TeraRecon at 10:4-14.

B. US20070046661
“In act 10, a perspective three-dimensional medical image is rendered from a virtual camera within a scanned volume. Data representing the scanned volume is acquired and used in real-time or acquired and stored for later use. The data represents different scan lines, planes or other scan formats within the volume. The data is formatted based on the scan or reformatted into a three-dimensional data set.” Siemens I at [0020].

“The data along the ray lines 30 determines the pixel values for the three-dimensional image. Maximum, minimum, average, or other projection rendering techniques may be used. Shading, opacity control or other now known or later developed volume rendering effects may be used. Alternatively, surface rendering from a perspective view is provided, such as surface rendering within the field of view defined by the extent of the ray lines 30. Rather than rendering one three-dimensional image for the position 32, two three-dimensional medical images are rendered. By using two sets of ray lines 30 offset from each other, the rendering is in stereo. Stereo views may enhance depth perception, making the 3D navigation more intuitive. Whether stereo or not, the perspective three-dimensional rendered image 34 appears as a picture seen through the virtual camera viewing window. The closer the image or images structure to the camera, the larger the structure appears.” Siemens I at [0023].

C. US6429884
“When the image B is located in the desired position, the marker M3—as shown in FIG. 6—is again mixed in by the control unit 8, i.e. a volume region VB is again defined. As can be seen from FIG. 6, the marker M3 can also include a region lying outside the displayed examination volume. After acquisition and definition of this volume region, the appertaining image dataset is also determined here by the image processor 6 and the corresponding image excerpt is clipped, see FIG. 6. Again, the corresponding, disturbing image excerpt was thus also cut out; the view onto volume parts lying therebehind is no longer impeded. The finished image B can then be stored in the memory 13.” Siemens II at 4:57-5:2.
displaying: orthogonal cross-sectional imaging slices, wherein the slices are marked with a corresponding visible boundary of the three-dimensional cursor, reference lines from the three-dimensional cursor to the orthogonal cross-sectional imaging slices, the three-dimensional cursor and results from a statistical analysis performed on the contents of the three-dimensional cursor.

B. US20070046661
“In act 12 of FIG. 1, one or more two-dimensional images 36 (see FIG. 3) are generated. The two-dimensional images 36 correspond to respective two-dimensional planes in the scanned volume. For example, three two-dimensional images 36 correspond to two or three orthogonal planes through the volume. The data in the scanned volume intersecting the plane or adjacent to the plane is selected and used to generate a two-dimensional image 36. Other multi-planar renderings may be used with or without orthogonal or perpendicular planes.” Siemens I at [0027].

“The two-dimensional images 36 include a representation 38 of the position 32 of the virtual camera. One, more, or all of the two-dimensional images 36 include the representation 38. Any representation may be used. In FIG. 3, the representation is a dot, such as a dot colored to provide contrast within the images 36. A camera icon, an intersection of lines, an arrow or other representation may be used. The representation 38 indicates the position 32 to the user. The representation includes or does not include additional graphics in other embodiments. For example, dashed lines or a shaded field represents the field of view.” Siemens I at [0030].

C. US6429884
“FIGS. 3 through 7 show only the image presentations reproduced on the display monitor 7. These are respective projection images wherein structures are three-dimensionally presented, i.e. one can effectively look through the displayed volume and view body parts located in the volume, with parts lying farther toward the back being covered by parts lying in front of them.” Siemens II at 4:28-34.

“FIG. 2 shows an example of the image presentation and image processing. Only the apparatus 5 is shown. Two overall images G1 and G2 are shown at the display monitor 7, these representing the examination volume of the subject O registered with the image pickup system 2. This can be, for example, a slice from the head of a patient, with a view onto the slice from below toward the head being shown in the overall image G1, and a view onto the slice shown in G1 from the front is shown in image G2. Marker M1, M2 each in the form of a line forming a rectangle are superimposed on the respective images G1, G2. A region of interest, namely the region located inside the rectangle, can be clipped from the overall image presentation with these markers M1, M2. The respective markers M1, M2 are variable with the control means 8, i.e. they can be displaced and varied in shape and/or size.” Siemens II at 4:1-20.
