Tech Reports

As of 2012, School of Computing Science technical
reports are being filed on arXiv,
which is supported by Cornell University. Please visit our main research publications page:

http://www.cs.sfu.ca/research/tech-reports-thesis.html

for a link to our technical reports on arXiv and for instructions
on how to submit a technical report.

Our pre-arXiv technical reports are listed below, many
with links to an online version.

Jump to year: | 2011 | 2010 | 2009 | 2008 | 2007 | 2006 | 2005 | 2004 | 2003 | 2002 | 2001 | 2000 | 1999 | 1998 | 1997 | 1996 | 1995 | 1994 | 1993 | 1992 | 1991 | 1990 | 1989 | 1988 | 1987 | 1986 | 1985 | 1984 | 1983 | 1982 | 1981 | 1980 |

**[Booshehrian et al., 2011]**

Author(s): Maryam Booshehrian, Torsten Möller, Randall M. Peterman, Tamara Munzner.

Title: Vismon: Facilitating Risk Assessment and Decision Making In Fisheries Management.

Technical Report: TR 2011-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 4, 2011.

Abstract:

**[Bergner et al., 2011]**

Author(s): Steven Bergner, Matt Crider, Arthur E. Kirkpatrick, Torsten Möller.

Title: Mixing Board Versus Mouse Interaction In Value Adjustment Tasks.

Technical Report: TR 2011-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2, 2011.

Abstract: We present a controlled, quantitative study with 12 participants comparing interaction with a haptically enhanced mixing board against interaction with a mouse in an abstract task that is motivated by several practical parameter space exploration settings. The study participants received 24 sets of one to eight integer values between 0 and 127, which they had to match by making adjustments with physical or graphical sliders, starting from a default position of 64. Based on recorded slider motion path data, we developed an analysis algorithm that identifies and measures different types of activity intervals, including error time (moving irrelevant sliders) and end time (breaks after completing each trial item). This decomposition facilitates data cleaning and more selective outlier removal, which is adequate for the small sample size. Our results showed a significant increase in speed of the mixing board interaction accompanied by reduced perceived cognitive load when compared with the traditional mouse-based GUI interaction. We noticed that the gains in speed are largely due to the improved times required for the hand to reach for the first slider (acquisition time) and also when moving between different ones, while the actual time spent manipulating task-relevant sliders is very similar for either input device. These results agree strongly with qualitative predictions from Fitts' Law: the larger targets afforded by the mixer handles contributed to its faster performance. Our study confirmed that the mixing board's advantage in acquisition and between-slider times increases as more sliders need to be manipulated in parallel. To investigate this further, we computed a measure of motion simultaneity based on velocity correlation. This enabled us to identify types of items for which increased simultaneous adjustments occur.
In the context of continuous parameter space exploration, our findings suggest that mixing boards are a viable option providing detailed multi-value control. The strengths of this input method should be particularly evident in settings where screen space is precious and undisrupted visual focus is crucial.
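The Fitts' Law argument above can be sketched numerically. This is a hypothetical illustration, not code or data from the report: the coefficients and target widths are invented, and only the qualitative prediction (wider targets yield lower movement time) reflects the abstract.

```python
import math

# Hypothetical sketch of the Fitts' Law prediction MT = a + b * log2(D/W + 1).
# The coefficients a, b and the widths/distances are invented example values,
# not measurements from the study.
def movement_time(distance_mm, width_mm, a=0.1, b=0.15):
    index_of_difficulty = math.log2(distance_mm / width_mm + 1)  # in bits
    return a + b * index_of_difficulty  # predicted seconds

gui_slider = movement_time(distance_mm=200, width_mm=6)     # narrow on-screen slider
mixer_handle = movement_time(distance_mm=200, width_mm=25)  # wider physical handle
assert mixer_handle < gui_slider  # larger target => lower predicted movement time
```

The same comparison, repeated per slider, suggests why the board's advantage grows when more sliders must be acquired in sequence.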

**[Andrews et al., 2011]**

Author(s): Shawn Andrews, Chris McIntosh, and Ghassan Hamarneh.

Title: Convex multi-region probabilistic segmentation with shape prior in the isometric log-ratio transformation space.

Technical Report: TR 2011-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2011.

Abstract: Image segmentation is often performed via the minimization of an energy function over a domain of possible segmentations. The effectiveness and applicability of such methods depend greatly on the properties of the energy function and its domain, and on what information can be encoded by it. Here we propose an energy function that achieves several important goals. Specifically, our energy function is convex and incorporates shape prior information while simultaneously generating a probabilistic segmentation for multiple regions. Our energy function represents multi-region probabilistic segmentations as elements of a vector space using the isometric log-ratio (ILR) transformation. To our knowledge, these four goals (convex, with shape priors, multi-region, and probabilistic) do not exist together in any other method, and this is the first time ILR is used in an image segmentation method. We provide examples demonstrating the usefulness of these features.

**[Khosravi et al., 2011]**

Author(s): Hassan Khosravi, Oliver Schulte, and Martin Ester.

Title: Parametrized Bayes nets for modelling database distributions.

Technical Report: TR 2011-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2011.

Abstract: Many databases store data in relational format, with different types of entities and information about links between the entities. Poole introduced Parametrized Bayes nets (PBNs) as a graphical model for relational data. Their main use has been for first-order probabilistic inference, where the goal is to predict facts about individual entities. In this paper we utilize PBNs for a new task: Learning class-level or first-order dependencies, which model the general database statistics over attributes of linked objects and links (e.g., the percentage of A grades given in computer science classes). Class-level statistical relationships are important in themselves, and they support applications like policy making, strategic planning, and query optimization. We describe efficient structure and parameter learning algorithms that construct a PBN model for a given input database. An evaluation of our methods on three data sets shows that they are computationally feasible for realistic table sizes, and that the learned structures represent the statistical information in the databases well. After learning compiles the database statistics into a parametrized Bayes net, querying these statistics via Bayes net inference is faster than with SQL queries, and does not depend on the size of the database.

**[Torsney-Weir et al., 2011]**

Author(s): Thomas Torsney-Weir, Ahmed Saad, Torsten Möller, Britta Weber, Hans-Christian Hege, and Jean-Marc Verbavatz.

Title: PReSM: Principled parameter finding for image segmentation algorithms using visual response surface exploration.

Technical Report: TR 2011-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, Mar 2011.

Abstract: In this paper we address the difficult problem of parameter-finding in image segmentation. We replace a tedious manual process, often based on guesswork and luck, with a principled approach that systematically explores the parameter space. Our core idea is the following two-stage technique: We start with a sparse sampling of the parameter space and apply a statistical model to estimate the response of the segmentation algorithm. The statistical model incorporates a model of uncertainty of the estimation, which we use in conjunction with the actual estimate in (visually) guiding the user towards areas that need refinement by placing additional sample points. In the second stage the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high. In our exploration we rely on existing ground-truth images in order to evaluate the 'goodness' of an image segmentation technique. We evaluate its usefulness by demonstrating this technique on two image segmentation algorithms: a three-parameter model to detect microtubules in electron tomograms and an eight-parameter model to identify functional regions in dynamic Positron Emission Tomography scans.
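The two-stage idea (sparse sampling, then uncertainty-guided refinement, then navigation to high-response regions) can be sketched on an invented one-parameter toy. Inverse-distance weighting and a nearest-sample distance stand in for the report's statistical response-surface model and its uncertainty estimate; none of the numbers below come from the report.

```python
# Toy sketch, not the authors' code: sparsely sample a 1-D "segmentation
# goodness" response, estimate it by inverse-distance weighting, use distance
# to the nearest sample as an uncertainty proxy, and refine where uncertain.
def response(p):            # stand-in for running a segmentation and scoring it
    return -(p - 0.6) ** 2  # made-up response, peak goodness at p = 0.6

samples = [0.0, 0.25, 0.5, 0.75, 1.0]          # stage one: sparse sampling
scores = {p: response(p) for p in samples}

def estimate(p):
    # inverse-distance-weighted estimate of the response at p
    weights = {q: 1.0 / (abs(p - q) + 1e-9) for q in samples}
    total = sum(weights.values())
    return sum(weights[q] * scores[q] for q in samples) / total

def uncertainty(p):
    return min(abs(p - q) for q in samples)     # far from all samples => uncertain

candidates = [i / 100 for i in range(101)]
next_sample = max(candidates, key=uncertainty)  # refinement: sample where uncertain
best_guess = max(candidates, key=estimate)      # stage two: high estimated response
```

A real response surface model (e.g. a Gaussian process) would supply a principled variance instead of the distance proxy, but the control flow is the same.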


**[Andrews et al., 2010]**

Author(s): Shawn Andrews, Ghassan Hamarneh, and Ahmed Saad.

Title: Fast random walker with priors using precomputation for interactive medical image segmentation.

Technical Report: TR 2010-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2010.

Abstract: Updating segmentation results in real-time based on repeated user input is a reliable way to guarantee accuracy, paramount in medical imaging applications, while making efficient use of an expert's time. The random walker algorithm with priors is a robust method able to find a globally optimal probabilistic segmentation with an intuitive method for user input. However, like many other segmentation algorithms, it can be too slow for real-time user interaction. We propose a speedup to this popular algorithm based on offline precomputation, taking advantage of the time images are stored on servers prior to an analysis session. Our results demonstrate the benefits of our approach. For example, the segmentations found by the original random walker and by our new precomputation method for a given 3D image have a Dice's similarity coefficient of 0.975, yet our method runs in 1/25th of the time.

**[Cameron et al., 2010]**

Author(s): Robert D. Cameron, Ehsan Amiri, Kenneth S. Herdy, Dan Lin, Thomas C. Shermer, and Fred P. Popowich.

Title: Parallel parsing with bitstream addition: An XML case study.

Technical Report: TR 2010-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2010.

Abstract: A parallel parsing method using the concept of bitstream addition is introduced and studied in application to the problem of XML parsing and well-formedness checking. On processors supporting W-bit addition operations, the method can perform up to W finite state transitions per clock cycle. The method is based on the concept of parallel bitstream technology, in which parallel streams of bits are formed such that each stream comprises bits in one-to-one correspondence with the character code units of a source data stream. Parsing routines are initially prototyped in Python using its native support for unbounded integers to represent arbitrary-length bitstreams. We then introduce a compiler that can translate the Python code into low-level C-based implementations. These low-level implementations can then take advantage of the SIMD (single-instruction multiple-data) capabilities of commodity processors. Performance results show a dramatic speed-up over traditional alternatives employing byte-at-a-time parsing.

**[Changizi and Hamarneh, 2010]**

Author(s): Neda Changizi and Ghassan Hamarneh.

Title: Probabilistic multi-shape representation using an isometric log-ratio mapping.

Technical Report: TR 2010-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2010.

Abstract: Several sources of uncertainties in shape boundaries in medical images have motivated the use of probabilistic labeling approaches. Although it is well known that the sample space for the probabilistic representation of a pixel is the unit simplex, standard techniques of statistical shape analysis (e.g. principal component analysis) have been applied to probabilistic data as if they lie in the unconstrained real Euclidean space. Since these techniques are not constrained to the geometry of the simplex, the statistically feasible data produced end up representing invalid (out of the simplex) shapes. By making use of methods for dealing with what is known as compositional or closed data, we propose a new framework intrinsic to the unit simplex for statistical analysis of probabilistic multi-shape anatomy. In this framework, the isometric log-ratio (ILR) transformation is used to isometrically and bijectively map the simplex to the Euclidean real space, where data are analyzed in the same way as unconstrained data and then back-transformed to the simplex. We demonstrate favorable properties of ILR over existing mappings (e.g. LogOdds). Our results on synthetic and brain data exhibit a more accurate statistical analysis of probabilistic shapes.

**[Drew and Finlayson, 2010]**

Author(s): Mark S. Drew and Graham D. Finlayson.

Title: Improvement of colorization realism via the structure tensor.

Technical Report: TR 2010-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, Feb. 2010.

Abstract: Colorization is a user-assisted colour manipulation mechanism for changing greyscale images into coloured ones. Several colorization algorithms have been constructed, and these methods are able to produce appropriately colorized images given a surprisingly sparse set of hints supplied by the user. But these colour images may not in fact look realistic. Moreover, the contrast in the colorized image may not match the gradient perceived in the original greyscale image. We argue that it is this departure from the original gradient that contributes to the un-real appearance in some colorizations. To correct this, we make use of the Di Zenzo gradient of a colour image derived from the structure tensor, and adjust the colorized correlate such that the Di Zenzo definition of the maximum-contrast gradient agrees with the gradient in the original grey image. We present a heuristic method to this end and guided by this approach devise an optimization-based method. Our gradient projection tends to result in more natural-looking images in the resulting adjusted colorization. To explore the proposed method we utilize minimalist sets of colour hints and find in particular that 'hotspots' of un-realistic colour are subdued into regions of more realistic colour. This paper is not aimed at introducing a basic colorization but instead our method makes any colorization look more realistic; we demonstrate that this is the case for several different basic methods. In fact, we even find that a very simplistic colorization algorithm can be used provided the projection proposed here is then used to make the colorization more realistic looking.

**[Finlayson et al., 2010]**

Author(s): Graham D. Finlayson, David Connah, and Mark S. Drew.

Title: Lookup-table based gradient field reconstruction.

Technical Report: TR 2010-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, Aug 2010.

Abstract: In computer vision there are many applications where it is advantageous to process an image in the gradient domain and then re-integrate the gradient field: important examples include shadow removal, HDR imaging, lightness calculation and data fusion. A serious problem with this approach is that the reconstruction step often introduces artefacts – commonly, smoothed and smeared edges – to the recovered image. This is a result of the inherent ill-posedness of re-integrating a non-integrable field. Artefacts can be diminished, but not removed, by using complex to highly complex re-integration techniques.

Here we present a remarkably simple (and on the face of it naive) algorithm for reconstructing gradient fields. Suppose we start with a multichannel original, and from it derive a (possibly one of many) 1D gradient field. For many applications the derived gradient field will be non-integrable. Here we create a lookup-table based map relating the multichannel original to a reconstructed scalar output image whose gradient best matches the target gradient field. While this map could take a variety of forms, here we derive the best map from the multichannel gradient as a (nonlinear) function of the input to each of the target scalar gradients. In this framework, reconstruction is a simple equation-solving exercise of low dimensionality: the best mapping taking gradients of nonlinear functions of multichannel input into the desired gradient is *again used as the mapping in the image (not the gradient) domain*.

One obvious application of our method is to the image-fusion problem, e.g. the problem of converting a colour or higher-D image into greyscale. As a theoretical motivation for the method we prove that, for this problem, if it is possible to re-integrate the gradient field, then the re-integration must perforce be a lookup-table mapping. Most images will not exactly fulfil this paradigm, but the theorem serves to justify our approach. We then show, through extensive experiments, that our straightforward method preserves the target contrast as well as do complex previous re-integration methods *but without artefacts*, and with a substantially cheaper computational cost. Finally, we demonstrate the generality of the method by applying it to gradient field reconstruction in an additional area, the shading recovery problem.

**[Liu, 2010]**

Author(s): Rong Liu.

Title: Sampling criteria for Nyström method.

Technical Report: TR 2010-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, Feb 2010.

Abstract: Kernel related algorithms, including spectral techniques, have found their use in a variety of fields. However, the construction and eigen-decomposition of the Gram matrix necessitate efforts to reduce the computational overhead. One particularly interesting approach used for this purpose is the Nyström method. In this report, we study the sampling problem for the Nyström method. To measure the quality of a sampling set, we consider two related but different criteria: the difference between the original and the reconstructed Gram matrix, and the difference between the accurate and the approximate eigenvectors of the Gram matrix, each of which leads to an objective function. Based on these two objective functions, two greedy sampling schemes are proposed and their effectiveness is verified empirically.

**[Mirzaalian et al., 2010]**

Author(s): Hengameh Mirzaalian, Ghassan Hamarneh, and Tim K. Lee.

Title: Spatial normalization of human back images for dermatological studies.

Technical Report: TR 2010-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2010.

Abstract: Density of pigmented skin lesions (PSL) is a strong predictor of malignant melanoma. Some dermatologists advocate periodic full-body scans for high-risk patients. It is clinically important to compare and detect changes in the number and appearance of PSL across time. However, manual inspection and matching of PSL is tedious, error-prone, and suffers from inter-rater variability. Therefore, an automatic method for tracking corresponding PSL would have significant health benefits. In order to automate the tracking of PSL in human back images, we must perform spatial normalization of the coordinates of each PSL, as is done in brain atlases. We propose the first human back template (atlas) to obtain this normalization. Four pairs of anatomically meaningful landmarks (neck, shoulder, armpit and hip points) are used as reference points on the skin-back image. Using the landmarks, a grid with longitudes and latitudes is constructed and overlaid on each subject-specific back image. To perform spatial normalization, the grid is registered into the skin-back template, a unit-square rectilinear grid. We demonstrate the benefits of our approach, on 56 pairs of real dermatological images, through the increased accuracy of PSL matching algorithms when our anatomy-based normalized coordinates are used.

**[Rao et al., 2010]**

Author(s): Josna Rao, Rafeef Abugharbieh, and Ghassan Hamarneh.

Title: Adaptive regularization for image segmentation using local image curvature cues.

Technical Report: TR 2010-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2010.

Abstract: Image segmentation techniques typically require proper weighting of competing data fidelity and regularization terms. Conventionally, the associated parameters are set through tedious trial and error procedures and kept constant over the image. However, spatially varying structural characteristics, such as object curvature, combined with varying noise and imaging artifacts, significantly complicate the selection process of segmentation parameters. In this work, we propose a novel approach for automating the parameter selection by employing a robust structural cue to prevent excessive regularization of trusted (i.e. low noise) high curvature image regions. Our approach autonomously adapts local regularization weights by combining local measures of image curvature and edge evidence that are gated by a signal reliability measure. We demonstrate the utility and favorable performance of our approach within two major segmentation frameworks, graph cuts and active contours, and present quantitative and qualitative results on a variety of natural and medical images.

**[Schulte, 2010]**

Author(s): Oliver Schulte.

Title: Measuring data fit for Bayes nets applied to relational data.

Technical Report: TR 2010-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2010.

Abstract: Bayes nets (BNs) for relational databases are a major research topic in machine learning and artificial intelligence. When cyclic dependencies exist, measuring the fit of a BN model to relational data with a likelihood function is a challenge [4, 36, 26, 7]. A common approach to difficulties in defining a likelihood function is to employ a pseudo-likelihood; a prominent example is the pseudo likelihood defined for Markov Logic Networks (MLNs). This paper proposes a new pseudo likelihood P* for Parametrized Bayes Nets (PBNs) [30]. The pseudo log-likelihood L* = ln(P*) is similar to the single-table BN log-likelihood, where row counts in the data table are replaced by frequencies in the database. We introduce a new type of semantics based on the concept of random instantiations (groundings) from classic AI research [10, 1]: the measure L* is the expected log-likelihood of a random instantiation of the first-order variables in the PBN. The standard moralization method for converting a PBN to an MLN provides another interpretation of L*: the measure is closely related to the log-likelihood and to the pseudo log-likelihood of the moralized PBN. For parameter learning, the L*-maximizing estimates are the empirical conditional frequencies in the database. For structure learning, we show that the state-of-the-art learn-and-join method of Khosravi et al. [14] implicitly maximizes the L* measure. The measure provides a theoretical foundation for this algorithm, while the algorithm's empirical success provides experimental validation for its usefulness.
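The frequency-weighted form of the pseudo log-likelihood can be illustrated with a toy example. The attribute, frequencies, and parameters below are invented for illustration, not taken from the report: L* weights the log of each BN conditional probability by its empirical database frequency, and (as the abstract states for parameter learning) is maximized by the empirical conditional frequencies themselves.

```python
import math

# Toy sketch (hypothetical numbers): one binary attribute "grade=A"
# conditioned on "dept". L* = sum over (child, parent) states of
# frequency(child, parent) * ln P(child | parent).
frequencies = {            # invented empirical joint frequencies
    ("A", "CS"): 0.2, ("notA", "CS"): 0.3,
    ("A", "other"): 0.1, ("notA", "other"): 0.4,
}
params = {                 # empirical conditionals, e.g. P(A|CS) = 0.2/0.5 = 0.4
    ("A", "CS"): 0.4, ("notA", "CS"): 0.6,
    ("A", "other"): 0.2, ("notA", "other"): 0.8,
}
L_star = sum(f * math.log(params[k]) for k, f in frequencies.items())

perturbed = {k: 0.5 for k in params}   # some other parameter setting
L_perturbed = sum(f * math.log(perturbed[k]) for k, f in frequencies.items())
assert L_star > L_perturbed  # empirical conditional frequencies score higher
```

This mirrors the single-table BN log-likelihood with row counts replaced by database frequencies, as described in the abstract.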


**[Beyer and Keremoglu, 2009]**

Author(s): Dirk Beyer and M. Erkan Keremoglu.

Title: CPAchecker: A tool for configurable software verification.

Technical Report: TR 2009-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2009.

Abstract: Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as a command-line tool or as an Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker BLAST. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.

**[Beyer et al., 2009]**

Author(s): Dirk Beyer, Alessandro Cimatti, Alberto Griggio, M. Erkan Keremoglu, and Roberto Sebastiani.

Title: Software model checking via large-block encoding.

Technical Report: TR 2009-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2009.

Abstract: The construction and analysis of an abstract reachability tree (ART) are the basis for a successful method for software verification. The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.

**[Blagodurov et al., 2009]**

Author(s): Sergey Blagodurov, Sergey Zhuravlev, Serge Lansiquot, and Alexandra Fedorova.

Title: Addressing cache contention in multicore processors via scheduling.

Technical Report: TR 2009-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2009.

Abstract: Cache contention on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previously proposed solutions required changes to the last-level cache - a key component of the processor. This is a risky undertaking, and so these hardware-based solutions may or may not be adopted in future processors. Our goal is to address cache contention entirely in software, in thread-scheduling algorithms. Unlike existing software solutions, our scheduling algorithms do not require intrusive changes to the virtual memory system and sometimes can be implemented entirely at user level, which makes them an attractive choice for adoption in real systems. We propose several new heuristics allowing the scheduler to estimate mutual performance degradation that a group of threads will experience when sharing a cache. Our heuristics, based on a stack-distance profile of the program (a summary of the program's cache reuse patterns), provide higher accuracy in estimating cache contention than models comparable in computational complexity proposed in prior work. The data to compute these heuristics can be obtained online, via profiling or furnished by a compiler. Further, we develop two new scheduling algorithms based on these heuristics. Our best algorithm performs within 0.5% of the optimal 'oracle' solution and delivers up to 12% workload-average performance improvement compared to the default scheduler.

**[Dickie et al., 2009]**

Author(s): Ryan Dickie, Ghassan Hamarneh, and Rafeef Abugharbieh.

Title: Live-Vessel: Interactive vascular image segmentation with simultaneous extraction of optimal medial and boundary paths.

Technical Report: TR 2009-23, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2009.

Abstract: Vessel analysis is important for a wide range of clinical diagnoses and disease research such as diabetes and malignant brain tumours. Vessel segmentation is a crucial first step in such analysis but is often complicated by structural diversity and pathology. Existing automated techniques have mixed results and difficulties with non-idealities such as imaging artifacts, tiny vessel structures and regions with bifurcations. In this paper we propose Live-Vessel as a novel and intuitive semi-automatic vessel segmentation technique. Our contribution is two-fold. First, we extend the classic Live-Wire technique from user-guided contours to user-guided paths along vessel centrelines with automated boundary detection. Live-Vessel achieves this by globally optimizing vessel filter responses over both spatial (x, y) and non-spatial (radius) variables simultaneously. Secondly, our approach incorporates colour gradient and quaternion curvature image information in the segmentation process, unlike the majority of current techniques which employ greyscale values or a single colour channel. We validate our method using real medical data from the Digital Retinal Images for Vessel Extraction (DRIVE) database. Quantitative results show that, on average, Live-Vessel resulted in an 8-fold reduction in overall manual segmentation task time, at a 95% accuracy level. We also compare Live-Vessel to state-of-the-art methods and highlight its important advantages in providing the correct topology of the vascular tree hierarchy as well as the associated vessel medial (skeletal) curves and thickness profiles without the need for subsequent error-prone post-processing.

**[Dyer and Schaefer, 2009]**

Author(s): Ramsay Dyer and Scott Schaefer.

Title: Circumcentric dual cells with negative area.

Technical Report: TR 2009-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2009.

Abstract: The circumcentric dual complex associated with a triangulation is becoming a structure of interest in discrete differential geometry. It arises naturally in formulations of discrete exterior calculus and, in the two dimensional case that concerns us here, it provides an elegant interpretation of discrete Laplace operators based on the cotangent formula. If the primal triangulation is Delaunay, then the circumcentric dual complex is the Voronoi diagram of the vertices. On the other hand, if a primal edge is not locally Delaunay, the length of the corresponding dual edge will be negative. In many applications this does not present a problem. However in this note we draw attention to the possibility that the dual cell to a primal vertex may have negative area, and we discuss some of the implications. We review the definition of circumcentric dual cells and provide simple explicit constructions of triangle configurations in which a primal vertex has a circumcentric dual cell with negative area.

**[Dyer et al., 2009]**

Author(s): Ramsay Dyer, Hao Zhang, and Torsten Möller.

Title: A survey of Delaunay structures for surface representation.

Technical Report: TR 2009-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2009.

Abstract: The Delaunay triangulation characterizes a natural neighbour relation amongst points distributed in a Euclidean space. In this survey we examine extensions of the Delaunay paradigm that have been used to define triangle meshes for representing smooth surfaces embedded in three dimensional Euclidean space. Progress in this area has stemmed primarily from work done in surface reconstruction and surface meshing. However, our focus is not on the surface reconstruction or meshing algorithms themselves, but rather on the structures that they aim to produce. In particular we concentrate on three distinct Delaunay structures which differ according to the metric involved in their definition: the metric of the ambient Euclidean space; the intrinsic metric of the original surface; or the intrinsic metric of the mesh itself. We study the similarities and distinctions between these objects; their strengths and weaknesses both as theoretical tools and as practical data structures. The topic of this survey lies within the realm of geometry processing, a field of study generally associated with computer graphics. However, it is hoped that this survey will be of interest not just to those who study computer graphics, but to anybody whose research touches on a need to represent non-Euclidean geometry with Euclidean simplices.

**[Farahbod et al., 2009]**

Author(s): R. Farahbod, U. Glässer, A. Guitouni, and A. Khalili.

Title: Dynamic Resource Configuration Management for Distributed Net-Enabled Information Fusion.

Technical Report: TR 2009-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2009.

Abstract: Surveillance of large volume traffic demands robust and scalable network architectures for distributed information fusion. Operating in an adverse and uncertain environment, the ability to flexibly adapt to dynamic changes in the availability of mobile resources and the services they provide is critical for the success of the surveillance and rescue missions. We present here an extended and enhanced version of the Dynamic Resource Configuration Management (DRCM) system concept, with new features and improved algorithms to better address the adaptability requirements of such a resource network. The DRCM model is described in abstract functional and operational terms based on the Abstract State Machine (ASM) paradigm and the CoreASM open source tool environment for modeling dynamic properties of distributed mobile systems.

**[Finkbeiner et al., 2009]**

Author(s): Bernhard Finkbeiner, Alireza Entezari, Torsten Möller, and Dimitri Van De Ville.

Title: Efficient Volume Rendering on the Body-Centered Cubic Lattice Using Box Splines.

Technical Report: TR 2009-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2009.

Abstract: We demonstrate that non-separable box splines for body centered cubic (BCC) lattices can be adapted to fast evaluation on current graphics hardware. To this end, we develop the linear and quintic box splines in a piecewise polynomial (pp)-form as opposed to their currently known basis (B)-form. This enables real-time volume rendering of BCC lattices using box splines. Further, we offer a comparison of quintic box splines with the only other real-time evaluation method on BCC lattices, which is based on separate kernels for interleaved Cartesian cubics (CC). While quintic box splines have clearly improved accuracy, interleaved CC lattices are still faster, since they can take advantage of the dedicated Cartesian circuitry in current graphics hardware. This result remains valid with and without prefiltering, using synthetic data as well as experimental data obtained from Optical Projection Tomography. We provide shader code in order to ease the adaptation of box splines for the practitioner.

**[Gu and Imani, 2009]**

Author(s): Qian-Ping Gu and Navid Imani.

Title: Small Kernel for Planar Connected Dominating Set.

Technical Report: TR 2009-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2009.

Abstract: We prove a small linear-size kernel for the connected dominating set problem in planar graphs through data reduction. Our set of rules reduces a planar graph G with n vertices and connected domination number γ_{c}(G) to a kernel of size at most 413 γ_{c}(G) in O(n^{5}) time. Our result gives a fixed-parameter algorithm running in time 2^{O(√γ_{c}(G))} · γ_{c}(G) + n^{5} using the standard branch-decomposition based approach.

**[Gu and Tamaki, 2009a]**

Author(s): Qian-Ping Gu and Hisao Tamaki.

Title: Constant-Factor Approximations of Branch-Decomposition and Largest Grid Minor of Planar Graphs in O(n^{1+ε}) Time.

Technical Report: TR 2009-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2009.

Abstract: We give constant-factor approximation algorithms for computing the optimal branch-decompositions and largest grid minors of planar graphs. For a planar graph G with n vertices, let bw(G) be the branchwidth of G and gm(G) the largest integer g such that G has a g × g grid as a minor. Let c ≥ 1 be a fixed integer and α, β be arbitrary constants satisfying α > c + 1.5 and β > 2c + 1.5. We give an algorithm which constructs in O(n^{1+1/c} log n) time a branch-decomposition of G with width at most α · bw(G). We also give an algorithm which constructs a g × g grid minor of G with g ≥ gm(G) / β in O(n^{1+1/c} log n) time. The constants hidden in the Big-Oh notations are proportional to c / (α − (c+1.5)) and c / (β − (2c+1.5)), respectively.

**[Gu and Tamaki, 2009b]**

Author(s): Qian-Ping Gu and Hisao Tamaki.

Title: Efficient Reduction of the Vertex-Disjoint Menger Problem to the Edge-Disjoint Menger Problem in Undirected Planar Graphs.

Technical Report: TR 2009-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2009.

Abstract: We show that the vertex-disjoint Menger problem in undirected planar hypergraphs can be reduced to the edge-disjoint Menger problem in undirected planar graphs. Our reduction is based on the relation between a planar hypergraph and its medial graph introduced in the graph minor theory. The reduction, together with the known result for the edge-disjoint Menger problem, gives a simple linear time algorithm for the vertex-disjoint Menger problem in undirected planar hypergraphs.

**[Gu and Tamaki, 2009c]**

Author(s): Qian-Ping Gu and Hisao Tamaki.

Title: Improved Bounds on the Planar Branchwidth with Respect to the Largest Grid Minor Size.

Technical Report: TR 2009-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2009.

Abstract: For a graph G, let bw(G) denote the branchwidth of G and gm(G) the largest integer g such that G contains a g × g grid as a minor. We show that bw(G) ≤ 3 gm(G) + 2 for every planar graph G. This is an improvement over the bound bw(G) ≤ 4 gm(G) due to Robertson, Seymour and Thomas, as long as gm(G) > 2. Our proof is constructive and implies an O(n^{2} log n) time algorithm for planar graphs to find a g × g grid as a minor, where g ≥ ⌊bw(G)/3⌋, giving an almost 3-approximation of the largest grid minor of planar graphs. We also study the tightness of the above bound. A k × h cylinder, denoted by C_{k,h}, is the Cartesian product of a cycle on k vertices and a path on h vertices. We show that bw(C_{2h,h}) = 2 gm(C_{2h,h}) = 2h, and therefore the coefficient in the above upper bound is within a factor of 3/2 from the best possible.

**[Gu and Tamaki, 2009d]**

Author(s): Qian-Ping Gu and Hisao Tamaki.

Title: A Radius-Based Linear-Time-Constructive Upper Bound on the Branchwidth of Planar Hypergraphs.

Technical Report: TR 2009-21, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2009.

Abstract: Let k and d be positive integers and G a planar hypergraph given with a fixed plane embedding. Suppose each edge of G is incident with at most k vertices and there is an edge e_0 of G such that for each vertex v of G, there is an alternating sequence of vertices and faces v = x_0, x_1, ..., x_j, j ≤ d, where x_j is a vertex incident with e_0 and each pair of adjacent elements x_{i-1} and x_i in the sequence consists of a vertex and a face incident with each other. We show that there is a branch-decomposition of G with width at most k + d and, moreover, that given G and e_0, such a decomposition can be constructed in O(n) time, where n = |V(G)| + |E(G)|. This result extends a similar bound of Tamaki (ESA 2003) and implies an O(n^{2}) time (2+ε)-approximation algorithm for branch-decompositions of planar graphs.

**[Hamarneh, 2009]**

Author(s): Ghassan Hamarneh.

Title: Multi-Label MRF Optimization via a Least Squares s-t Cut.

Technical Report: TR 2009-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2009.

Abstract: We present a novel method to *optimally* reformulate the NP-hard, multi-label Markov random field optimization as a *single* binary (*s-t*) graph cut problem. The optimality of this reformulation is due to the fact that the edge weights in the new *s-t* graph are obtained using a novel least squares solution approximating the constraints of the initial k-label setup. Each non-terminal vertex in the original graph is replaced by ⌈log_{2}(k)⌉ new vertices, and the original graph edges are replaced by new edges connecting the new vertices to each other and to only two terminal nodes, the source *s* and the sink *t*. The minimal *s-t* cut labels each new vertex with a binary (*s* vs. *t*) 'Gray' encoding, which is then decoded into a decimal label number that assigns each of the original vertices to one of k classes. We analyze the properties of the approximation and present quantitative as well as qualitative image segmentation results, one of the many possible computer vision applications of multi-label MRF optimization.

**[Hamarneh and Changizi, 2009]**

Author(s): Ghassan Hamarneh and Neda Changizi.

Title: Barycentric Label Space.

Technical Report: TR 2009-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2009.

Abstract: Multiple neighboring organs or structures captured in medical images are frequently represented by labeling the underlying image domain (e.g. labeling a brain image into white matter, gray matter, cerebrospinal fluid, etc). However, given the different sources of uncertainties in shape boundaries (e.g. tissue heterogeneity, partial volume effect, fuzzy image processing and segmentation results), it is favorable to adopt a probabilistic labeling approach. In many medical image analysis tasks, it is necessary that shapes are properly manipulated, e.g. optimized (i.e. segmentation), de-noised, interpolated, or statistically analyzed (e.g. using principal component analysis). Formulating a representation that not only captures the uncertainty in shapes, but also facilitates algebraic manipulations, is therefore a desirable goal. In this work, we extend the label space multi-shape representation of Malcolm et al. (MICCAI 2008) to the barycentric label space, which describes a proper invertible mapping between probability vectors and label space. Our method uses the probability vectors as barycentric coefficients describing arbitrary labels in label space and a non-singular matrix inversion to map points in label space to probability vectors. We demonstrate how the conversion errors are eliminated compared to the original label space approach, and demonstrate the effect of these errors in the context of smoothing, linear shape statistics, and uncertainty calculation, on artificial objects and brain image data.

**[Hamarneh et al., 2009]**

Author(s): Ghassan Hamarneh, Chris McIntosh, and Mark S. Drew.

Title: Perception-Based Visualization of High-Dimensional Medical Images Using Distance Preserving Dimensionality Reduction.

Technical Report: TR 2009-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2009.

Abstract: A method for visualizing high dimensional medical image data is proposed. The method operates on images in which each pixel contains a high dimensional vector, e.g. a time activity curve (TAC) in a dynamic positron emission tomography (dPET) image, or a tensor, as is the case in diffusion tensor magnetic resonance images (DTMRI). A nonlinear mapping reduces the dimensionality of the data to achieve two goals: distance preservation and embedding into a perceptual color space. We use a multi-dimensional scaling distance preserving mapping to render similar pixels (e.g. DT or TAC pixels) with perceptually similar colors. The 3D CIELAB perceptual color space is adopted as the range of the distance preserving mapping, with a final similarity transform mapping colors to a maximum gamut size. Similarity between pixels is determined analytically as geodesics on the manifold of pixels or approximated using manifold learning techniques. In particular, dissimilarity between DTMRI pixels is evaluated via a Log-Euclidean Riemannian metric respecting the manifold of the rank 3, 2nd order positive semi-definite DTs. Dissimilarity between TACs is approximated via ISOMAP. We demonstrate our approach via artificial high-dimensional data, as well as clinical DTMRI and dPET images. Our results demonstrate the effectiveness of our approach in capturing, in a perceptually meaningful way, important structures in the data.

**[Hefeeda and Hsu, 2009]**

Author(s): Mohamed Hefeeda and Cheng-Hsin Hsu.

Title: Design and Evaluation of a Testbed for Mobile TV Networks.

Technical Report: TR 2009-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2009.

Abstract: This paper presents the design of a complete, open-source testbed for broadcast networks that offer mobile TV services. Although basic architectures and protocols have been developed for such networks, detailed performance tuning and analysis are still needed, especially when these networks scale to serve many diverse TV channels to numerous subscribers. The detailed performance analysis could also motivate designing new protocols and algorithms for enhancing future mobile TV networks. Currently, many researchers evaluate the performance of mobile TV networks using simulation and/or theoretical modeling methods. These methods, while useful for early assessment, typically abstract away many necessary details of actual, fairly complex, networks. Therefore, an open-source platform for evaluating new ideas in a real mobile TV network is needed. This platform is currently not possible with commercial products, because they are sold as black boxes without the source code. In this paper, we summarize our experiences in designing and implementing a testbed for one of the most widely-deployed standards for mobile TV networks: Digital Video Broadcast - Handheld (DVB-H). We integrate off-the-shelf hardware components with carefully designed software modules to realize a scalable testbed that covers almost all aspects of real networks. We use our testbed to empirically analyze the energy saving and channel switching delay performance metrics in mobile TV networks.

**[Kameda et al., 2009]**

Author(s): T. Kameda, I. Suzuki, and J. Zhang.

Title: Finding the Minimum-Distance Schedule for a Boundary Searcher with a Flashlight.

Technical Report: TR 2009-24, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2009.

Abstract: Consider a dark polygonal region in which intruders move freely, trying to avoid detection. A robot, which is equipped with a flashlight, moves along the polygon boundary to illuminate all intruders. We want to minimize the total distance traveled by the robot until all intruders are detected in the worst case. We present an O(n log n) time and O(n) space algorithm for optimizing this metric, where n is the number of vertices of the given polygon. This improves upon the best known time and space complexities of O(n^{2}) and O(n^{2}), respectively.

**[McIntosh and Hamarneh, 2009]**

Author(s): Chris McIntosh and Ghassan Hamarneh.

Title: Optimal Weights in Convex Functionals: Taking the Guesswork Out of Segmentation.

Technical Report: TR 2009-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2009.

Abstract: Energy functional minimization is a popular technique for medical image segmentation. The segmentation must be initialized, weights for competing terms of an energy functional must be tuned, and the functional minimized. There is a substantial amount of guesswork involved. We reduce this guesswork by: first, analytically determining optimal weights; second, minimizing a convex energy functional independent of the initialization; and, third, providing an upper bound on the accuracy achievable by a given convex energy functional. We demonstrate improved results over the state of the art on a set of 470 clinical examples.

**[Mokhtarian and Hefeeda, 2009]**

Author(s): Kianoosh Mokhtarian and Mohamed Hefeeda.

Title: Authentication of Scalable Video Streams with Low Communication Overhead.

Technical Report: TR 2009-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2009.

Abstract: The large prevalence of multimedia systems in recent years makes the security of multimedia communications an important and critical issue. We study the problem of securing the delivery of scalable video streams so that receivers can ensure the authenticity of the video content. Our focus is on recent Scalable Video Coding (SVC) techniques, such as H.264/SVC, which can provide three scalability types at the same time: temporal, spatial, and visual quality. This three-dimensional scalability offers a great flexibility that enables customizing video streams for a wide range of heterogeneous receivers and network conditions. This flexibility, however, is not supported by current stream authentication schemes in the literature. We propose an efficient and secure authentication scheme that accounts for the full scalability of video streams, and enables verification of all possible substreams that can be extracted from the original stream. In addition, we propose an algorithm for minimizing the amount of authentication information that needs to be attached to streams. The proposed authentication scheme supports end-to-end authentication, in which any third party entity involved in the content delivery process, such as stream adaptation proxies and caches, does not have to understand the authentication mechanism. Our simulation study with real video traces shows that the proposed authentication scheme is robust against packet losses, incurs low computational cost for receivers, has short delay, and adds low communication overhead. Finally, we implement the proposed authentication scheme as an open source library called svcAuth, which can be used as a transparent add-on by any multimedia streaming application.

**[Rao et al., 2009]**

Author(s): Josna Rao, Ghassan Hamarneh, and Rafeef Abugharbieh.

Title: Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation.

Technical Report: TR 2009-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2009.

Abstract: Image segmentation techniques are predominantly based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.
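The reliability-driven weighting idea in the Rao et al. abstract can be sketched in a few lines. This is an illustrative toy, not the authors' method: the function name, the gradient-magnitude reliability measure, and the mapping from reliability to a per-pixel regularization weight are all assumptions made for the example.

```python
import numpy as np

def adaptive_regularization_weights(image, eps=1e-8):
    """Toy spatially-adaptive weight map (illustrative sketch only).

    High local gradient magnitude -> reliable image evidence -> small
    regularization weight; flat or featureless regions -> large weight,
    so a segmentation energy would smooth more where evidence is weak.
    """
    gy, gx = np.gradient(image.astype(float))          # per-axis finite differences
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)              # local edge strength
    reliability = grad_mag / (grad_mag.max() + eps)    # normalize to [0, 1]
    return 1.0 - reliability                           # per-pixel weight on regularization

# Usage: a vertical step edge yields near-zero weights along the edge
# and weights of one in the flat regions on either side.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
lam = adaptive_regularization_weights(img)
```

The design choice here is just the simplest plausible proxy for "image reliability"; any local confidence measure (noise estimates, classifier margins, etc.) could be substituted in the same role.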


**[Alim and Möller, 2008]**

Author(s): Usman R. Alim and Torsten Möller.

Title: A Discrete Fourier Transform for the Hexagonal and Body-Centered Cubic Lattices.

Technical Report: TR 2008-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2008.

Abstract: We derive a Discrete Fourier Transform for the hexagonal lattice in 2D and the body-centered cubic lattice in 3D. The key idea is to use an axis-aligned window to truncate and periodize the sampled function. This leads to separable transforms that can be efficiently evaluated using the Fast Fourier Transform. We provide experimental results that validate our proposed transforms.

**[Alim et al., 2008]**

Author(s): Usman R. Alim, Alireza Entezari, and Torsten Möller.

Title: The Lattice-Boltzmann Method on Optimal Sampling Lattices.

Technical Report: TR 2008-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2008.

Abstract: In this paper, we extend the single relaxation time Lattice-Boltzmann Method (LBM) to the three-dimensional body-centered cubic (BCC) lattice. We show that the D3bQ15 lattice defined by a fifteen neighborhood connectivity of the BCC lattice is not only capable of more accurately discretizing the velocity space of the continuous Boltzmann equation than the D3Q15 Cartesian lattice, but also achieves a comparable spatial discretization with 30% fewer samples. We validate the accuracy of our proposed lattice by investigating its performance on the 3D lid-driven cavity flow problem. We quantitatively compare our results with published benchmark data and show that the D3bQ15 lattice offers significant cost savings while maintaining a comparable accuracy. We use a dye advection method to visualize the 3D flow fields, taking advantage of the fact that volumetric data on BCC lattices can be rendered faster than data on Cartesian lattices.

**[Bian and Gu, 2008]**

Author(s): Zhengbing Bian and Qian-Ping Gu.

Title: Computing Branch Decomposition of Large Planar Graphs.

Technical Report: TR 2008-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2008.

Abstract: A graph of small branchwidth admits efficient dynamic programming algorithms for many NP-hard problems on the graph. A key step in these algorithms is to find a branch decomposition of small width for the graph. Given a planar graph G of n vertices, an optimal branch decomposition of G can be computed in polynomial time, e.g., by the edge-contraction method in O(n^{3}) time. All known algorithms for planar branch decomposition use the procedure of Seymour and Thomas which, given an integer β, decides whether G has branchwidth at least β in O(n^{2}) time. Recent studies report efficient implementations of the Seymour and Thomas procedure that compute the branchwidth of planar graphs of size up to one hundred thousand edges in a practical time and memory space. Using the efficient implementations as a subroutine, it is reported that the edge-contraction method computes an optimal branch decomposition for planar graphs of size up to several thousand edges in a practical time, but it is still time consuming for graphs of larger size. In this paper, we propose divide-and-conquer based heuristics that use the Seymour and Thomas procedure to compute optimal branch decompositions of planar graphs. Computational studies show that our heuristics are much faster than the edge-contraction algorithms and can compute an optimal branch decomposition of planar graphs of size up to 50000 edges in a practical time.

**[Brantingham et al., 2008a]**

Author(s): Patricia Brantingham, Uwe Glässer, Piper Jackson, and Mona Vajihollahi.

Title: Collaborative Software Development for Criminology Research Projects.

Technical Report: TR 2008-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2008.

Abstract: Innovative research in criminology and other social sciences promotes computational methods in the advanced study of social phenomena. Computational methods offer a *novel way of thinking* by providing new problem solving and analysis techniques never explored before. However, developing computational views in an interdisciplinary research-oriented setting also demands special attention to the unique requirements of the research environment. We focus on these requirements and present a novel methodological framework for developing computational models of social systems in a collaborative research environment.

**[Brantingham et al., 2008b]**

Author(s): Patricia Brantingham, Uwe Glässer, Piper Jackson, and Mona Vajihollahi.

Title: Modeling Criminal Activity in Urban Landscapes.

Technical Report: TR 2008-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2008.

Abstract: Computational and mathematical methods arguably have an enormous potential for serving practical needs in crime analysis and prevention by offering novel tools for crime investigations and experimental platforms for evidence-based policy making. We present a comprehensive formal framework and tool support for mathematical and computational modeling of criminal behavior to facilitate systematic experimental studies of a wide range of criminal activities in urban environments. The focus is on spatial and temporal aspects of different forms of crime, including opportunistic and serial violent crimes. However, the proposed framework also provides a basis to push beyond conventional empirical research and engage the use of computational thinking and social simulations in the analysis of terrorism and counter-terrorism.

**[Cameron et al., 2008]**

Author(s): Robert D. Cameron, Kenneth S. Herdy, and Dan Lin.

Title: High Performance XML Parsing Using Parallel Bit Stream Technology.

Technical Report: TR 2008-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2008.

Abstract: Parabix (parallel bit streams for XML) is an open-source XML parser that employs the SIMD (single-instruction, multiple-data) capabilities of modern-day commodity processors to deliver dramatic performance improvements over traditional byte-at-a-time parsing technology. Byte-oriented character data is first transformed to a set of 8 parallel bit streams, each stream comprising one bit per character code unit. Character validation, transcoding and lexical item stream formation are all then carried out in parallel using bitwise logic and shifting operations. Byte-at-a-time scanning loops in the parser are replaced by bit scan loops that can advance by as many as 64 positions with a single instruction. A performance study comparing Parabix with the open-source Expat and Xerces parsers is carried out using the PAPI toolkit. Total CPU cycle counts, level 2 data cache misses and branch mispredictions are measured and compared for each parser. The performance of Parabix is further studied with a breakdown of the cycle counts across the core components of the parser. Prospects for further performance improvements are also outlined, with a particular emphasis on leveraging the intra-register parallelism of SIMD processing to enable intra-chip parallelism on multicore architectures.

**[Colak et al., 2008]**

Author(s): Recep Colak, Flavia Moser, Arash Rafiey, and Martin Ester.

Title: Densely-Connected Bi-Clustering.

Technical Report: TR 2008-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2008.

Abstract: Large-scale gene expression experiments and interaction networks have become major data sources for discovery in systems biology. In several types of interaction networks, as is widely established, active modules, i.e. functional, simultaneously active groups of genes, are best encoded as highly interconnected (dense) regions that are co-expressed and show significant changes (coactive) in an accompanying set of gene expression experiments. In this technical report we describe a novel frequent pattern mining approach for finding active modules which we model as connected, dense and homogeneous graphs.

**[Dyer et al., 2008]**

Author(s): Ramsay Dyer, Hao Zhang, and Torsten Möller.

Title: Observations on Gabriel Meshes and Delaunay Edge Flips.

Technical Report: TR 2008-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2008.

Abstract: We undertake a study of the local properties of 2-Gabriel meshes: manifold triangle meshes each of whose faces has an open Euclidean diametric ball that contains no mesh vertices. We show that, under mild constraints on the dihedral angles, such meshes are Delaunay meshes: the open geodesic circumdisk of each face contains no mesh vertices. We detail the distinction between these two mesh structures and reveal why the obstructions which prohibit the existence of Gabriel meshes as homeomorphic representatives of smooth surfaces do not hinder the construction of Delaunay meshes.

**[Farahbod and Glässer, 2008]**

Author(s): Roozbeh Farahbod and Uwe Glässer.

Title: Dynamic Resource Management for Adaptive Distributed Information Fusion in Large Volume Surveillance — Phase One.

Technical Report: TR 2008-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2008.

Abstract: We propose a highly adaptive and auto-configurable, multi-layer network architecture for distributed information fusion to address large volume surveillance challenges, assuming a multitude of different sensor types on multiple mobile platforms for intelligence, surveillance and reconnaissance. Our focus is on network enabled operations to efficiently manage and improve employment of a set of mobile resources, their information fusion engines and networking capabilities under dynamically changing and essentially unpredictable conditions. A high-level model of the proposed architecture is formally described in abstract functional and operational terms based on the *Abstract State Machine* formalism. This description of the underlying design concepts provides a concise and precise blueprint for reasoning about key system attributes at an intuitive level of understanding.

**[Finkbeiner et al., 2008a]**

Author(s): Bernhard Finkbeiner, Usman Raza Alim, Alireza Entezari, and Torsten Möller.

Title: Hardware-Accelerated Tomography Reconstruction and Volume Rendering on BCC Lattices Using Box Splines.

Technical Report: TR 2008-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2008.

Abstract: We present a novel method for rendering volumetric data sampled on the body centered cubic (BCC) lattice using the GPU. Although GPUs are highly optimized for the use of Cartesian lattices, we show that rendering volumetric data on a BCC lattice using the GPU is feasible and, in the case of C^{2} reconstruction, competitive. This allows the tomographic reconstruction of large data sets on the BCC lattice in a practical amount of time. Specifically, we are focusing on Emission Tomography and its standard implementation via the Expectation-Maximization (EM) algorithm. We rigorously show, for the first time, the equivalence of this algorithm with an implementation via volume rendering and demonstrate that the equivalence is reached by using a nearest neighbor filter. As an application, we introduce a new GPU-accelerated algorithm for tomographic reconstruction of OPT (optical projection tomography) data, making it possible to acquire volumetric data from several projections into a BCC lattice. We demonstrate that, due to its optimal sampling properties, tomographically reconstructed volumes on a BCC lattice are more accurate than their Cartesian counterparts at approximately the same computational cost. We therefore pave the way for more wide-spread production of data sampled on the BCC lattice.

**[Finkbeiner et al., 2008b]**

Author(s): Bernhard Finkbeiner, Alireza Entezari, Torsten Möller, and Dimitri Van De Ville.

Title: Hardware-Accelerated High-Quality Box Spline Reconstruction on the BCC Lattice.

Technical Report: TR 2008-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2008.

Abstract: We present a novel method for rendering volumetric data sampled on the body centered cubic (BCC) lattice using the GPU. We use box splines as reconstruction kernels that are specially tailored to the BCC lattice. Specifically, we present a fast GPU-based implementation of the linear and quintic box splines for real-time volume rendering. The quintic box spline is a smooth reconstruction kernel with C^{2} continuity, but it is not interpolating. Therefore, we propose prefiltering as an initialization step to enforce the interpolation constraint and to drastically improve the reconstruction quality. Our extensive experimental results clearly demonstrate that box splines for hardware-accelerated volume rendering are an alternative with strongly improved image quality compared with similar methods on the Cartesian and BCC lattices, at similar frame rates.

**[Glässer and Vajihollahi, 2008]**

Author(s): Uwe Glässer and Mona Vajihollahi.

Title: Identity Management Architecture.

Technical Report: TR 2008-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2008.

Abstract: Identity management plays a crucial role in many application contexts, including e-government, e-commerce, business intelligence, investigation, and homeland security. The variety of approaches to and techniques for identity management, while addressing some of the challenges, has introduced new problems, especially concerning interoperability and privacy. We focus here on two fundamental issues within this context: (1) a firm unifying semantic foundation for the systematic study of identity management and improved accuracy in reasoning about key properties in identity management system design, and (2) the practical relevance of developing a distributed approach to identity management (as opposed to a centralized one). Our proposed mathematical framework is built upon essential requirements of an identity management system (such as privacy, user control, and minimality), and serves as a starting point for bringing together different approaches in a systematic fashion in order to develop a distributed architecture for identity management.

**[Haffari and Sarkar, 2008]**

Author(s): Gholamreza Haffari and Anoop Sarkar.

Title: Homotopy-Based Semi-Supervised Hidden Markov Models and Probabilistic Context-Free Grammars.

Technical Report: TR 2008-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2008.

Abstract: This paper explores the use of the homotopy method for training semi-supervised Hidden Markov Models (HMMs) used for sequence labeling and semi-supervised Probabilistic Context-Free Grammars (PCFGs). We provide novel polynomial-time algorithms for HMMs and PCFGs to trace the local maximum of the likelihood function from full weight on the labeled data to full weight on the unlabeled data. We present a detailed experimental analysis of different techniques for choosing the best balance between labeled and unlabeled data based on the characteristics observed along this path for HMMs. Furthermore, experimental results on the field segmentation task in information extraction show that the homotopy-trained HMM: (1) significantly outperforms an EM-trained one, and (2) provides a more accurate alternative to the use of held-out data to pick the best balance for combining labeled and unlabeled data.

**[Jiang et al., 2008]**

Author(s): B. Jiang, J. Pei, X. Lin, D. W-L Cheung, and J. Han.

Title: Mining Preferences from Superior and Inferior Examples.

Technical Report: TR 2008-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2008.

Abstract: Mining user preferences plays a critical role in many important applications such as customer relationship management (CRM), product and service recommendation, and marketing campaigns. In this paper, we identify an interesting and practical problem of mining user preferences: in a multidimensional space where the user preferences on some categorical attributes are unknown, from some superior and inferior examples provided by a user, can we learn about the user's preferences on those categorical attributes? We model the problem systematically and show that mining user preferences from superior and inferior examples is challenging. Although the problem has great potential in practice, to the best of our knowledge, it has not been explored systematically before. As the first attempt to tackle the problem, we propose a greedy method and show that our method is practical using real data sets and synthetic data sets.

**[Marzban et al., 2008]**

Author(s): Marjan Marzban, Qian-Ping Gu, and Xiaohua Jia.

Title: Computational Study on Dominating Set Problem of Planar Graphs.

Technical Report: TR 2008-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2008.

Abstract: Recently, there has been significant theoretical progress towards fixed-parameter algorithms for the DOMINATING SET problem of planar graphs. It is known that the problem on a planar graph with n vertices and domination number k can be solved in O(c^{sqrt k} n) time, where c is some constant. However, there has been no computational study of the practical performance of the O(c^{sqrt k} n) time algorithms. In this paper, we report computational results for the algorithm of Fomin and Thilikos, which uses the branch-decomposition based approach and achieves the smallest constant c = 2^{15.13} among these algorithms. The computational results show that the algorithm can solve the DOMINATING SET problem of large planar graphs in a practical time for the class of graphs with small branchwidth. For the class of graphs with large branchwidth, the size of instances that can be solved by the algorithm in a practical time is limited to a few hundred edges. The practical performance of the algorithm coincides with its theoretical analysis. We also propose a new heuristic for the DOMINATING SET problem. The heuristic uses the branch-decomposition based approach but computes an approximation of the optimal solution. The computational results show that the heuristic can give better solutions than previously known heuristics in a practical time. The heuristic can be used for planar graphs with large branchwidth, for which it is not possible to find an optimal solution in a practical time. The results of this paper suggest that branch-decomposition based algorithms may be practical for some applications on planar graphs.

**[Pei et al., 2008]**

Author(s): Jian Pei, Yufei Tao, Jiexing Li, and Xiaokui Xiao.

Title: Butterfly: Privacy Preserving Publishing on Multiple Quasi-Identifiers.

Technical Report: TR 2008-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2008.

Abstract: Recently, privacy preserving data publishing has attracted significant research interest. Most of the existing studies focus only on situations where the data in question is published using one quasi-identifier. However, in a few important applications, a practical demand is to publish a data set on multiple quasi-identifiers for multiple users simultaneously, which poses several challenges. How can we generate one anonymized version of the data so that a privacy preservation requirement like k-anonymity is satisfied for all users? Moreover, how can we reduce the information loss as much as possible while the privacy preservation requirements are met? In this paper, we identify and tackle the novel problem of privacy preserving publishing on multiple quasi-identifiers. A naïve solution that publishes a separate version for each quasi-identifier unfortunately suffers from the possibility that those releases can be joined to intrude on privacy. Interestingly, we show that it is possible to generate only one anonymized table that satisfies k-anonymity on all quasi-identifiers for all users without significant information loss. We systematically develop an effective method for privacy preserving publishing for multiple users, and report an empirical study using real data to verify the feasibility and the effectiveness of our method.

**[Schulte et al., 2008]**

Author(s): Oliver Schulte, Hassan Khosravi, Flavia Moser, and Martin Ester.

Title: Join Bayes Nets: A New Type of Bayes Net for Relational Data.

Technical Report: TR 2008-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2008.

Abstract: Many databases store data in relational format, with different types of entities and information about links between the entities. The field of statistical-relational learning has developed a number of new statistical models for such data, e.g. Probabilistic Relational Models and Markov Logic Networks. Instead of introducing a new model class, we propose using a standard model class in a new way: Join Bayes nets contain nodes that correspond to the descriptive attributes of the database tables, plus Boolean relationship nodes that indicate the presence of a link. As Join Bayes nets are just a special type of Bayes net, their semantics is standard (edges denote direct associations, d-separation implies probabilistic independence, etc.), and Bayes net inference algorithms can be used 'as is' to answer probabilistic queries involving relations. We present a dynamic algorithm for estimating the parameters of a Join Bayes net and discuss how Join Bayes nets model various well-known statistical-relational phenomena like autocorrelation, aggregation and recursion.

**[Taboada et al., 2008]**

Author(s): Maite Taboada, Kimberly Voll, and Julian Brooke.

Title: Extracting Sentiment as a Function of Discourse Structure and Topicality.

Technical Report: TR 2008-20, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2008.

Abstract: We present an approach to extracting sentiment from texts that makes use of contextual information. Using two different approaches, we extract the most relevant sentences of a text, and calculate semantic orientation weighting those more heavily. The first approach makes use of discourse structure via Rhetorical Structure Theory, and extracts nuclei as the relevant parts; the second approach uses a topic classifier built using support vector machines, which extracts topic sentences from texts. The use of weights on relevant sentences shows an improvement over word-based methods that consider the entire text equally. In the paper, we also describe an enhancement of our previous word-based methods in the treatment of intensifiers and negation, and the addition of other parts of speech beyond adjectives.

**[Tsoukalas et al., 2008]**

Author(s): Kathleen Tsoukalas, Bin Zhou, Jian Pei, and Davor Cubranic.

Title: PLEDS: A Personalized Entity Detection System Based on Web Log Mining Techniques.

Technical Report: TR 2008-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2008. (Original February 2008; minor revision July 2008.)

Abstract: With the expansion of the blogosphere, many specialized blogs have become available that bring very technical subject matter to readers with non-technical backgrounds. While the theme of these blogs may be of interest to such readers, the posts themselves may contain terms that non-experts in the field are unfamiliar with and may wish to know more about. We developed PLEDS, a PersonaLized Entity Detection System that identifies interesting entities and provides related information for individual users by mining web logs and query logs. The results of a systematic user study show that PLEDS provides users with an enriched Internet surfing experience, and that it outperforms related systems proposed in the literature in terms of usefulness.

**[Wang and Mori, 2008]**

Author(s): Yang Wang and Greg Mori.

Title: Max-Margin Hidden Conditional Random Fields for Human Action Recognition.

Technical Report: TR 2008-21, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2008.

Abstract: We present a new method for classification with structured latent variables. Our model is formulated using the max-margin formalism in the discriminative learning literature. We propose an efficient learning algorithm based on the cutting plane method and decomposed dual optimization. We apply our model to the problem of recognizing human actions from video sequences, where we model a human action as a global root template and a constellation of several 'parts'. We show that our model outperforms another similar method that uses hidden conditional random fields, and is comparable to other state-of-the-art approaches. More importantly, our proposed work is quite general and can potentially be applied in a wide variety of vision problems that involve various complex, interdependent latent structures.

**[Ward and Hamarneh, 2008]**

Author(s): Aaron D. Ward and Ghassan Hamarneh.

Title: GMAT: The Groupwise Medial Axis Transform for Fuzzy Skeletonization and Intelligent Pruning.

Technical Report: TR 2008-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2008.

Abstract: There is a frequent need to compute medial shape representations of each of a group of structures, e.g. for use in a medical study of anatomical shapes. We present a novel approach to skeletonization that leverages information provided from such a group. We augment the traditional medial axis transform with an additional coordinate stored at each medial locus, indicating the confidence that the branch on which that locus lies represents signal and not noise. This confidence is calculated based on the support given to that branch by corresponding branches in other skeletons in the group. We establish the aforementioned correspondence by a set of bipartite graph matchings using the Hungarian algorithm, and compute branch support based on similarity of computed geometric and topological features at each branch. This groupwise skeletonization approach supports an intelligent pruning algorithm, which we show to operate quickly and provide pruning in an intuitive manner. We show that the method is amenable to automatic detection of skeletal configurations with one, or more than one, topological class of skeletons. This is useful to medical studies which often involve patient groups whose structures may differ topologically.

**[Zhou et al., 2008]**

Author(s): Bin Zhou, Jian Pei, and Wo-Shun Luk.

Title: A Brief Survey on Anonymization Techniques for Privacy Preserving Publishing of Social Network Data.

Technical Report: TR 2008-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2008.

Abstract: Nowadays, partly driven by many Web 2.0 applications, more and more social network data has been made publicly available and analyzed in one way or another. Privacy preserving publishing of social network data has become an increasingly important concern. In this paper, we present a brief yet systematic review of the existing anonymization techniques for privacy preserving publishing of social network data. We identify the new challenges in privacy preserving publishing of social network data compared to the extensively studied relational case, and examine the possible problem formulations along three important dimensions: privacy, background knowledge, and data utility. We survey the anonymization methods for privacy preservation in two categories: clustering-based approaches and graph modification approaches.
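As a hedged illustration of the kind of privacy requirement that graph modification approaches commonly target (an example chosen for this listing, not drawn from the report itself): k-degree anonymity requires that every degree value in the published graph be shared by at least k vertices, so an adversary who knows a target's degree cannot narrow the target down to fewer than k candidates. A minimal sketch, assuming an adjacency-set representation:

```python
from collections import Counter

# Illustrative sketch only (not the report's method): check k-degree
# anonymity of a graph given as {vertex: set of neighbours}.
def is_k_degree_anonymous(adj: dict, k: int) -> bool:
    # Count how many vertices share each degree value.
    degree_counts = Counter(len(neighbours) for neighbours in adj.values())
    # Every degree value must occur at least k times.
    return all(count >= k for count in degree_counts.values())
```

For example, a 4-cycle (every vertex has degree 2) is 4-degree anonymous, while a star with three leaves is not 2-degree anonymous, because the centre's degree of 3 is unique.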


**[Atkins et al., 2007]**

Author(s): M.S. Atkins, A Moise, and R. Rohling.

Title: Understanding Search and Errors in Comparative Visual Search: Insights from Eye-Gaze Studies.

Technical Report: TR 2007-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2007.

Abstract: We demonstrate that the search for a target during a comparative visual search task of two side-by-side images is undertaken in two stages. First, a regular scan-path search phase locates a likely target; this is followed by a confirmation phase of several fixations on the target in each side-by-side image. The horizontal saccades and fixations between the left and right images during the confirmation phase are necessary because of the limitations of visual working memory; subjects prefer to make multiple saccades rather than expend cognitive effort remembering features from one image to the other. Errors can be classified into different categories based on cumulative fixation durations on the targets in both the left and right images. Analysis of our false negative errors showed that recognition errors arose when fewer than three fixations and horizontal saccades were made on the missed targets in the right and left images. For cumulative fixations longer than 1200 ms on missed targets in both images, the errors can be considered decision errors rather than recognition errors. These results show that eye-gaze tracking during a comparative visual search task yields readily observed insights into search strategy and decision-making.

**[Bian et al., 2007]**

Author(s): Zhengbing Bian, Qian-Ping Gu, Marjan Marzban, Hisao Tamaki, and Yumi Yoshitake.

Title: Empirical Study on Branchwidth and Branch Decomposition of Planar Graphs.

Technical Report: TR 2007-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2007.

Abstract: We propose efficient implementations of the algorithm of Seymour and Thomas which, given a planar graph and an integer β, decides whether the graph has branchwidth at least β. The computational results of our implementations show that the branchwidth of a planar graph can be computed in practical time and memory space for some instances with about one hundred thousand edges. Previous studies report that a straightforward implementation of the algorithm is memory consuming, which could be a bottleneck for solving instances with more than a few thousand edges. Our results suggest that with efficient implementations, the memory space required by the algorithm may not be a bottleneck in practice. Applying our implementations, an optimal branch decomposition of a planar graph of practical size can be computed in a reasonable time. Branch-decomposition based algorithms have been explored as an approach for solving many NP-hard problems on graphs. The results of this paper suggest that the approach could be practical.

**[Cameron, 2007]**

Author(s): Robert D. Cameron.

Title: u8u16 - A High-Speed UTF-8 to UTF-16 Transcoder Using Parallel Bit Streams.

Technical Report: TR 2007-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2007.

Abstract: The u8u16 program is a high-performance UTF-8 to UTF-16 transcoder using a SIMD programming technique based on parallel bit streams. In essence, character data is processed in blocks of size BLOCKSIZE, where BLOCKSIZE is the number of bits that are held in a SIMD register. A set of parallel registers each contains one bit per character code unit for BLOCKSIZE consecutive code unit positions. For example, in UTF-8 processing, eight parallel registers are used for the eight individual bits of the UTF-8 code units. The u8u16 transcoder written in this way takes advantage of the pipelining and SIMD capabilities of modern processor architectures to achieve substantially better performance than byte-at-a-time conversion.

**[Dale et al., 2007]**

Author(s): Cameron Dale, Jiangchuan Liu, Joseph Peters, and Bo Li.

Title: On the Evolution of BitTorrent Network Topologies: Technical Report.

Technical Report: TR 2007-24, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2007.

Abstract: This technical report expands on some ideas from the previous experimental study that closely examined the underlying topologies of BitTorrent swarms. We develop a theory for creating small-world graphs in the network of peers connected to each other in a BitTorrent swarm, using only very simple tracker modifications. Through simulation and experimentation, we verify the presence of clustering in new experiments using this modified tracker implementation, while also ensuring that the characteristic path length of the graphs does not unduly increase. We find that all four of the BitTorrent graphs we consider now exhibit the small-world characteristics that were not found in the previous work.

**[Doucette and Fedorova, 2007]**

Author(s): Daniel Doucette and Alexandra Fedorova.

Title: Base Vectors: A Potential Technique for Microarchitectural Classification of Applications.

Technical Report: TR 2007-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2007.

Abstract: In this paper we examine the use of base vector applications as a tool for classifying an application's usage of a processor's resources. We define a series of base vector applications: simple applications designed to place directed stress on a single processor resource. By co-scheduling base vector applications with a target application on a CMT multithreaded processor, we can determine an application's sensitivity to a particular processor resource, and its intensity with respect to a particular processor resource. An application's sensitivities and intensities for a set of processor resources comprise that application's sensitivity and intensity vectors. We envision that sensitivity and intensity vectors could be used for (a) understanding microarchitectural properties of the application; (b) forecasting an optimal co-schedule for an application on a multithreaded processor; and (c) evaluating the suitability of a particular architecture or microarchitecture for the workload without executing the workload on that architecture. We describe the methods for constructing base vector applications and for obtaining applications' sensitivity and intensity vectors. Using the UltraSPARC T1 (Niagara) as the experimental platform, we validate base vectors as a method for classifying applications by showing that sensitivity and intensity vectors can be used to successfully predict the optimal co-schedule for an application. We outline a roadmap for future work, including evaluating the use of base vectors for cross-architectural performance forecasts.

**[Drew and Bergner, 2007]**

Author(s): Mark S Drew and Steven Bergner.

Title: Spatio-Chromatic Decorrelation for Color Image Compression.

Technical Report: TR 2007-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2007.

Abstract: We investigate the implications of a unified spatio-chromatic basis for image compression and reconstruction. Different adaptive and general methods (PCA, ICA, and DCT) are applied to generate bases. While such bases with spatial extent are typically investigated in terms of their correspondence to human visual perception, we are interested in their applicability to multimedia encoding. The performance of the extracted spatio-chromatic patch bases is evaluated in terms of quality of reconstruction with respect to their potential for data compression. Since independent component analysis is not as widely used as it should be, compared to the other decorrelation methods applied here in a new domain, we also provide a review of ICA. The results discussed here are intended to provide another path towards perceptually-based encoding of visual data. This leads to a deeper understanding of the role played by chromatic features in data reduction. [Please view in color.]

**[Dyer et al., 2007a]**

Author(s): Ramsay Dyer, Hao Zhang, and Torsten Möller.

Title: An Investigation of the Spectral Robustness of Mesh Laplacians.

Technical Report: TR 2007-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2007.

Abstract: This paper presents an empirical study of the spectral properties of mesh Laplacian operators and how they are affected by changes in the connectivity or number of vertices of the mesh. Specifically, we investigate to what extent the eigenvalues of an operator differ when evaluated on different meshes representing the same object. Our experiments compare two different types of operators, one being purely connectivity based and the other based on discrete approximations to the Laplace-Beltrami operator. The spectra of the latter operators generally displayed more robustness.

**[Dyer et al., 2007b]**

Author(s): Ramsay Dyer, Hao Zhang, and Torsten Möller.

Title: On Voronoi-Delaunay Duality and Delaunay Meshes.

Technical Report: TR 2007-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2007.

Abstract: In this paper, we are concerned with Delaunay triangulations of the vertex set of a piecewise flat (pwf) surface. We first propose the notion of well-formed Voronoi diagrams and establish a precise dual relationship between them and proper Delaunay triangulations on pwf surfaces. Then we provide an algorithm which, given any input manifold triangle mesh, constructs a *Delaunay mesh*: a manifold triangle mesh whose edges form an intrinsic Delaunay triangulation of its vertex set. Rather than relying on a geodesic Delaunay triangulation on the input mesh, our algorithm swaps the physical mesh edges based on the locally Delaunay criterion. We prove that when a physical edge that is not locally Delaunay is swapped, the surface area of the mesh is reduced. In order to ensure a proper Delaunay triangulation, some new vertices may need to be introduced, leading to a refinement scheme, and we detail the cases involved.

**[Gu et al., 2007]**

Author(s): Baohua Gu, Veronica Dahl, and Fred Popowich.

Title: Recognizing Biomedical Named Entities in the Absence of Human Annotated Corpora.

Technical Report: TR 2007-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2007.

Abstract: Biomedical named entity recognition is an important task in biomedical text mining. Currently the dominant approach is supervised learning, which requires a sufficiently large human-annotated corpus for training. In this paper, we propose a novel approach aiming at minimizing the annotation requirement. The idea is to use a dictionary, which is essentially a list of entity names compiled by domain experts, and sometimes more readily available than domain experts themselves. Given an unlabelled training corpus, we label the sentences by a simple dictionary lookup, which provides us with highly reliable but incomplete positive data. We then run an SVM-based self-training process in the spirit of semi-supervised learning, to iteratively learn from the positive and unlabeled data to build a reliable classifier. Our evaluation on the BioNLP-2004 Shared Task data sets suggests that the proposed method can be a feasible alternative to traditional approaches when human annotation is not available.

**[Haffari and Sarkar, 2007]**

Author(s): Gholamreza Haffari and Anoop Sarkar.

Title: Analysis of Semi-Supervised Learning with the Yarowsky Algorithm.

Technical Report: TR 2007-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2007.

Abstract: The Yarowsky algorithm is a rule-based semi-supervised learning algorithm that has been successfully applied to some problems in computational linguistics. The algorithm was not mathematically well understood until Abney (2004), which analyzed some specific variants of the algorithm and also proposed some new algorithms for bootstrapping. In this paper, we extend Abney's work and show that some of his proposed algorithms actually optimize (an upper bound on) an objective function based on a new definition of cross-entropy, which in turn is based on a particular instantiation of the Bregman distance between probability distributions. Moreover, we suggest some new algorithms for rule-based semi-supervised learning and show connections with harmonic functions, minimum multi-way cuts in graph-based semi-supervised learning, and the exponential family of probability distributions.

**[Hefeeda, 2007]**

Author(s): Mohamed Hefeeda.

Title: Forest Fire Modeling and Early Detection Using Wireless Sensor Networks.

Technical Report: TR 2007-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2007.

Abstract: We design a wireless sensor network for early detection of forest fires. Our design is based on the Fire Weather Index (FWI) System developed by the Canadian Forest Service. Our experimental results show the efficiency and accuracy of the proposed system.

**[Hefeeda and Ahmadi, 2007]**

Author(s): Mohamed Hefeeda and Hossein Ahmadi.

Title: Network Connectivity Under Probabilistic Communication Models in Wireless Sensor Networks.

Technical Report: TR 2007-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2007.

Abstract: Several previous works have experimentally shown that communication ranges of sensors are not regular disks; rather, they follow probabilistic models. Yet, many current connectivity maintenance protocols assume the disk communication model for convenience and ease of analysis. In addition, current protocols do not provide any assessment of the quality of communication between nodes. In this paper, we take a first step in designing connectivity maintenance protocols for more realistic communication models. We propose a distributed connectivity maintenance protocol that explicitly accounts for the probabilistic nature of communication links and achieves a given target communication quality between nodes. Our protocol is simple to implement, and we demonstrate its robustness against random node failures, inaccuracy of node locations, and imperfect time synchronization of node clocks using extensive simulations. We compare our protocol against others in the literature and show that it activates fewer nodes, consumes much less energy, and significantly prolongs the network lifetime. We extend our connectivity protocol to provide probabilistic coverage as well, where the sensing ranges of sensors follow probabilistic models.

**[Hefeeda and Noorizadeh, 2007]**

Author(s): Mohamed Hefeeda and Behrooz Noorizadeh.

Title: On the Benefits of Cooperative Proxy Caching for Peer-to-Peer Traffic.

Technical Report: TR 2006-20, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2007.

Abstract: Peer-to-peer (P2P) systems currently generate a major fraction of the total Internet traffic, accounting for as much as 60-70% of the traffic in some ISPs. This paper analyzes the potential of cooperative proxy caching for P2P traffic as a means to ease the burden imposed by P2P traffic on ISPs. In particular, we propose two models for cooperative caching of P2P traffic. The first model enables cooperation among caches that belong to different autonomous systems (ASes), while the second considers cooperation among caches deployed within the same AS. We analyze the potential gain of cooperative caching in these two models. To perform this analysis, we conduct an eight-month measurement study on a popular P2P system to collect actual traffic traces for multiple caches. Then, we perform an extensive trace-based simulation study to analyze different aspects of cooperative caching schemes. Our results demonstrate that: (i) significant improvement in byte hit rate can be achieved using cooperative caching, (ii) simple object replacement policies are sufficient to achieve that gain, and (iii) the overhead imposed by cooperative caching is negligible. In addition, we develop a simple analytic model to assess the gain from cooperative caching in different settings. The model accounts for the number of caches, salient P2P traffic features, and network characteristics. Our model confirms that substantial gains from cooperative caching are attainable under wide ranges of traffic and network characteristics.

**[Hsu and Hefeeda, 2007a]**

Author(s): Cheng-Hsin Hsu and Mohamed Hefeeda.

Title: Optimal Coding of Multi-Layer and Multi-Version Video Streams.

Technical Report: TR 2007-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2007.

Abstract: Recent video coders, such as H.264/SVC, can encode a video stream into multiple layers, each with a different rate. Moreover, each layer can either be coarse-grained scalable (CGS) or fine-grained scalable (FGS). FGS layers support wider ranges of client bandwidth than CGS layers, but suffer from higher coding inefficiency. Currently there are no systematic ways in the literature to determine the optimal stream structure that renders the best average quality for all clients. In this paper, we formulate an optimization problem to determine the optimal rate and encoding granularity (CGS or FGS) of each layer in a scalable video stream that maximizes a system-defined utility function for a given client distribution. We design an efficient, yet optimal, algorithm to solve this optimization problem. Our algorithm is general in the sense that it can employ arbitrary utility functions for clients. We implement our algorithm and verify its optimality. We show how various structurings of scalable video streams affect individual client utilities. We compare our algorithm against a heuristic algorithm that has been used before in the literature, and we show that our algorithm outperforms the other one in all cases.

**[Hsu and Hefeeda, 2007b]**

Author(s): Cheng-Hsin Hsu and Mohamed Hefeeda.

Title: Partitioning of multiple fine-grained scalable video sequences concurrently streamed to heterogeneous clients.

Technical Report: TR 2007-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2007.

Abstract: Fine-grained scalable (FGS) coding of video streams has been proposed in the literature to accommodate client heterogeneity. FGS streams are composed of two layers: a base layer, which provides basic quality, and a single enhancement layer that adds incremental quality refinements proportional to the number of bits received. The base layer uses nonscalable coding, which is more efficient in terms of compression ratio than the scalable coding used in the enhancement layer. Thus, for coding efficiency, larger base layers are desired. Larger base layers, however, disqualify more clients from getting the stream. In this paper, we experimentally analyze this coding efficiency gap using diverse video sequences. For FGS sequences, we show that this gap is a non-increasing function of the base layer rate. We then formulate an optimization problem to determine the base layer rate of a single sequence to maximize the average quality for a given client bandwidth distribution. We design an optimal and efficient algorithm (called FGSOPT) to solve this problem. We extend our formulation to the multiple-sequence case, in which a bandwidth-limited server concurrently streams multiple FGS sequences to diverse sets of clients. We prove that this problem is NP-complete. We design a branch-and-bound algorithm (called MFGSOPT) to compute the optimal solution. MFGSOPT runs fast for many typical cases because it intelligently cuts the search space. In the worst case, however, it has exponential time complexity. We also propose a heuristic algorithm (called MFGS) to solve the multiple-sequence problem. We experimentally show that MFGS produces near-optimal results and that it scales to large problems: it terminates in less than 0.5 seconds for problems with more than 30 sequences. Therefore, MFGS can be used in dynamic systems, where the server periodically adjusts the structure of FGS streams to suit current client distributions.

**[Hua et al., 2007]**

Author(s): Ming Hua, Jian Pei, Wenjie Zhang, and Xuemin Lin.

Title: Efficiently answering probabilistic threshold top-k queries on uncertain data.

Technical Report: TR 2007-26, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2007.

Abstract: Uncertain data is inherent in a few important applications such as environmental surveillance and mobile object tracking. Top-k queries (also known as ranking queries) are often natural and useful in analyzing uncertain data in those applications. In this paper, we study the problem of answering probabilistic threshold top-k queries on uncertain data, which computes the uncertain records that have a probability of at least p of being in the top-k list, where p is a user-specified probability threshold. We present an efficient exact algorithm and a fast sampling algorithm. An empirical study using real and synthetic data sets verifies the effectiveness of probabilistic threshold top-k queries and the efficiency of our methods.

**[Kolokolova et al., 2007]**

Author(s): Antonina Kolokolova, Yongmei Liu, David G. Mitchell, and Eugenia Ternovska.

Title: Model expansion and the expressiveness of FO(ID) and other logics.

Technical Report: TR 2007-29, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2007.

Abstract: Model expansion is the task of determining, given a formula and a structure for a part of the vocabulary of the formula, whether there is an expansion of the structure that satisfies the formula. The recent development of a problem-solving paradigm based on model expansion (Mitchell & Ternovska, 2005; Mitchell, Ternovska, Hach & Mohebali, 2006) posed the question of the complexity of this problem for the logics used in the paradigm. We discuss the complexity of the model expansion problem for a number of logics, alongside that of satisfiability and model checking. As the task is equivalent to witnessing leading existential second-order quantifiers in a model checking setting, the paper is in large part a survey of this area together with some new results. In particular, we describe the combined and data complexity of model expansion for FO(ID) (Denecker & Ternovska, 2008), as well as the guarded and k-guarded logics of Andreka, van Benthem and Nemeti (1998).

**[Liu and Sarkar, 2007]**

Author(s): Yudong Liu and Anoop Sarkar.

Title: Experimental evaluation of LTAG-based features for semantic role labeling.

Technical Report: TR 2007-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2007.

Abstract: In this technical report, we propose the use of the Lexicalized Tree-Adjoining Grammar (LTAG) formalism as an important additional source of features for the Semantic Role Labeling (SRL) task. Using a set of one-vs-all Support Vector Machines (SVMs), we evaluate these LTAG-based features. Our experiments show that LTAG-based features can improve SRL accuracy significantly. When compared with the best known set of features used in state-of-the-art SRL systems, we obtain an improvement in F-score from 82.34% to 85.25%.

**[Mitchell and Ternovska, 2007]**

Author(s): David Mitchell and Eugenia Ternovska.

Title: On the expressiveness of Essence.

Technical Report: TR 2007-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2007.

Abstract: Development of languages for specifying or modelling problems is an important direction in constraint modelling. To provide greater abstraction and modelling convenience, these languages are becoming more syntactically rich, leading to a variety of questions about their expressive power. In this paper, we consider the expressiveness of Essence, a specification language with a rich variety of syntactic features. We identify natural fragments of Essence that capture the complexity classes P, NP, all levels Σ_{p}^{i} of the polynomial hierarchy, and all levels k-NEXP of the nondeterministic exponential-time hierarchy. The union of these classes is the very large complexity class ELEMENTARY. One goal is to begin to understand which features play a role in the high expressive power of the language and which are purely features of convenience. We also discuss the expressive limit of Essence, a notion of capturing NP-search which is slightly different from that of capturing NP decision problems, and the formalization of arithmetic in Essence and related languages. Our study of arithmetic raises some, perhaps surprising, issues. Our study is an application of descriptive complexity theory, and illustrates the value of taking a logic-based view of modelling and specification languages.

**[Sahinalp et al., 2007]**

Author(s): S. Cenk Sahinalp, Peter J. Unrau, Cagri Aksay, Emre Karakoc, and Chi Kin Ho.

Title: ncRNA discovery and functional identification via sequence motifs.

Technical Report: TR 2007-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2007.

Abstract: Non-coding RNAs play regulatory roles in gene expression via establishing stable joint structures with target mRNAs through complementary sequence motifs. Sequence motifs are also important determinants of the structure of ncRNAs. Here we introduce two computational tools that both exploit differential distributions of short sequence motifs in ncRNAs for the purpose of identifying their loci and functionality. (1) smyRNA is an ncRNA search tool based on the identification of structural motifs in a genome sequence. (2) pRuNA is the basis of an index structure which can search a collection of ncRNAs to determine those that can establish stable interactions with a query mRNA. pRuNA uses the differential distribution of functional motifs for the purpose of eliminating a significant majority of the ncRNAs while retaining the actual regulatory RNAs of a query mRNA. The retained ncRNAs are then verified by inteRNA, a fast technique for determining the topology of the joint secondary structure between an ncRNA and its potential target. In cross-training experiments on the E. coli genome, smyRNA identified the majority of the test ncRNAs while yielding no false positives. It also identified a number of previously unknown ncRNA candidates which are likely to be functional. We also tested our index structure on the E. coli ncRNAs: for a typical mRNA query, pRuNA eliminated 91% of the index while retaining the known regulatory RNA. On the complete Rfam (seed) ncRNA collection, the pruning efficiency of pRuNA was 86%. In both cases, the interaction between each query mRNA and its known regulatory RNA was verified by inteRNA.

**[Schulte et al., 2007]**

Author(s): Oliver Schulte, Flavia Moser, Martin Ester, and Zhiyong Lu.

Title: Defining association rules in the relational calculus.

Technical Report: TR 2007-23, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2007.

Abstract: One of the most utilized data mining tasks is the search for association rules. Association rules represent probabilistically significant relationships between items in transactions. We extend the concept of association rule to represent a much broader class of associations, which we refer to as *entity-relationship rules*. Semantically, entity-relationship rules express associations between properties of related objects. Syntactically, these rules are based on a broad subclass of safe domain relational calculus queries. We propose a new definition of support and confidence for entity-relationship rules and for the frequency of entity-relationship queries. We prove that the definition of frequency satisfies standard probability axioms and the Apriori property.

**[She et al., 2007]**

Author(s): Rong She, Ke Wang, Ada Waichee Fu, and Yabo Xu.

Title: Computing join aggregates over private tables.

Technical Report: TR 2007-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2007.

Abstract: We propose a privacy-preserving protocol for computing aggregation queries over the join of private tables. In this problem, several parties wish to share aggregated information over the join of their tables, but want to conceal the details that generate such information. The join operation presents a challenge to privacy preservation because it requires matching individual records from private tables. We solve this problem by a novel private sketching protocol that securely exchanges some randomized summary information about private tables. This protocol (1) protects individual records as well as the distribution of private values, (2) works on many general forms of aggregation functions, (3) handles group-by aggregates, and (4) handles roll-up/drill-down operations. Previous works have not provided this level of privacy for such queries.

**[Ternovska and Mitchell, 2007]**

Author(s): Eugenia Ternovska and David G. Mitchell.

Title: Declarative programming of search problems with built-in arithmetic.

Technical Report: TR 2007-28, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2007.

Abstract: We address the problem of extending a declarative programming framework based on the task of model expansion (MX) with built-in structured domains, arithmetic in particular. The challenge is to support natural modelling, by allowing quantification over the infinite domain, while still restricting expressive power, for example to NP. We present an extension of the MX-based framework to arithmetic structures, which may include multi-set operations (aggregates) as well as any polytime arithmetic relations. The idea is that users would use arithmetic as if it were 'built-in', without having to axiomatize arithmetic operations, encode numbers in binary or n-ary, or introduce the corresponding tables as part of each instance; however, simple syntactic conditions would have to be met to ensure complexity bounds. To this end, we develop a theory of model expansion with mixed meta-finite structures. Our main result is that model expansion for a suitable variant of first-order logic captures NP in the presence of arithmetical secondary structures. This is a generalization of a result of Grädel and Gurevich for meta-finite structures to meta-finite structures with mixed relations. We explain how to further generalize our result by using inductively definable guards.

**[Yu and Zhang, 2007]**

Author(s): Jun Yu and Hao Zhang.

Title: A prototype sketch-based architectural design system with behavior mode.

Technical Report: TR 2007-25, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2007.

Abstract: Widely used for architectural design, CAD systems such as AutoCAD provide powerful modeling functionalities. However, they are often too complex for an architect to use, especially during the conceptual design stage, which involves primarily sketching of ideas. Also, interactions are typically driven by mouse and keyboard, but these devices cannot be used as freely as a pencil. For an architect, a tablet PC with stylus provides an ideal interface. In this paper, we study sketch-based architectural design and modeling, and describe a prototype system that respects simplicity and human behavior mode (i.e., our natural habits of doing things). Our system provides a seamlessly integrated design/draw environment with intuitive and focused interaction built in, both in an attempt to provide a design environment that closely conforms to the architect's design habits. As a proof of concept, a traditional PC is still used and the stylus is simulated by a mouse, which is responsible for all system operations.

**[Zhang and Kameda, 2007]**

Author(s): John Z. Zhang and Tsunehiko Kameda.

Title: Linear-time algorithm for finding all door locations that make a room searchable.

Technical Report: TR 2007-27, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2007.

Abstract: A room is a simple polygon with a prespecified point, called the door, on its boundary. Search may be conducted by two guards on the boundary who keep mutual visibility at all times, or by a single boundary searcher with a flashlight. The search starts at the door and must detect any intruder that was in the room at the time the search started, preventing the intruder from escaping through the door. Depending on where the door is placed, a room may or may not be searchable; some rooms are unsearchable no matter where the door is placed. A problem of interest is to find all intervals on the boundary where the door can be placed for the resultant room to be searchable. It is known that this problem can be solved in O(n log n) time if the given polygon has n sides. We improve this complexity to O(n).
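The mutual-visibility requirement in the two-guard model above ultimately rests on visibility tests inside a simple polygon. As an illustrative sketch only (not the report's linear-time algorithm), two points on or inside a simple polygon see each other exactly when the segment joining them properly crosses no polygon edge, which a naive O(n) scan can check:

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def _properly_cross(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 cross at an interior point of both."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(polygon, door, q):
    """Naive visibility test: the segment from `door` to `q` must not
    properly cross any edge of the simple polygon (given as a vertex
    list; both points are assumed to lie on or inside the polygon)."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if _properly_cross(door, q, a, b):
            return False
    return True

# Hypothetical L-shaped room: the reflex corner blocks some sight lines.
room = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
```

With this room, `visible(room, (0, 0), (2, 0.5))` holds, while a door at `(4, 0.5)` cannot see `(0.5, 3)` around the corner. Per-query this is already linear in n; the report's contribution is computing all searchable door intervals within the same O(n) bound.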


**[Ahmadi and Hefeeda, 2006]**

Author(s): Hossein Ahmadi and Mohamed Hefeeda.

Title: Energy-efficient protocol for deterministic and probabilistic coverage in sensor networks.

Technical Report: TR 2006-21, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2006.

Abstract: We consider coverage problems under a more general sensing model than the disk model commonly employed in the literature. First, we propose a simple and general probabilistic sensing model to leverage some of the sensing capacity ignored by the disk sensing model. This model can be used to guide the design and analysis of future probabilistic coverage protocols. Simulation results using this model, with conservative parameters, show that savings of up to 30% in the number of activated sensors could be achieved over the disk model. Then, we propose a new, fully distributed, probabilistic coverage protocol based on the general sensing model. We prove that our protocol correctly covers a given area, and we provide bounds on convergence time, message complexity, and the number of nodes activated by our protocol. Also, we prove that the algorithm provides connectivity for large enough communication ranges. We implement our protocol and compare it against another probabilistic coverage protocol in the literature. Our experimental study shows that our protocol: (i) converges an order of magnitude faster, (ii) activates at least 50% fewer sensors while maintaining the same level of coverage, and (iii) consumes much less energy than the other protocol. Finally, we propose a deterministic coverage protocol that works under the common disk model. This protocol is obtained by setting a few parameters in the probabilistic protocol. This simple reduction is made possible by the flexibility of our probabilistic sensing model. We compare our deterministic protocol against two recent protocols that were shown to outperform others in the literature. Our comparisons indicate that our protocol outperforms the other two in several aspects, including the number of activated sensors, convergence time, and total energy consumed.

**[Bagheri and Hefeeda, 2006]**

Author(s): Majid Bagheri and Mohamed Hefeeda.

Title: Randomized k-coverage algorithms for dense sensor networks.

Technical Report: TR 2006-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2006.

Abstract: We propose new algorithms to achieve k-coverage in dense sensor networks. In such networks, covering sensor locations approximates covering the whole area. However, it has been shown that selecting the minimum set of sensors to activate from an already deployed set of sensors is NP-hard. We propose an efficient approximation algorithm which achieves a solution of size within a logarithmic factor of the optimal. We prove that our algorithm is correct and analyze its complexity. We implement our algorithm and compare it against two others in the literature. Our results show that the logarithmic factor is only a worst-case upper bound and the solution size is within a constant factor of the optimal in most cases. Our comparison reveals that our algorithm is about four orders of magnitude faster than the others. In addition, we design and implement a fully distributed version of our algorithm. Comparison with two other distributed algorithms in the literature indicates that our algorithm: (i) converges much faster than the others, (ii) achieves a close-to-optimal number of active sensors, and (iii) consumes up to one-tenth less energy than the other algorithms.

**[Bagheri et al., 2006]**

Author(s): Majid Bagheri, Mohamed Hefeeda, and Hossein Ahmadi.

Title: A near optimal k-coverage algorithm for large-scale sensor networks.

Technical Report: TR 2006-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: In sensor networks, k-coverage means that each point in a monitored area must be within the sensing range of at least k sensors. Finding the minimum number of sensors to achieve k-coverage is NP-hard [CIQC02, ZDG04]. We propose an efficient approximation algorithm for k-coverage which achieves a solution of size within a logarithmic factor of the optimal solution. We prove that our algorithm is correct and analyze its complexity. Furthermore, we show how our algorithm can: (i) cover arbitrarily-shaped areas, (ii) provide different degrees of coverage at different spots, and (iii) tolerate deployment inaccuracies. Most of these scenarios cannot be handled by previous algorithms. We also implement our algorithm and compare it against others in the literature. Our experimental results indicate that the algorithm is very scalable: coverage for an area of size 10 km^{2} with sensors of range 15 m is obtained within a few seconds on a commodity PC. Our results also show that the logarithmic factor is only a worst-case upper bound and the solution is within a constant factor of the optimal in most cases.

**[Bhattacharya et al., 2006]**

Author(s): B. Bhattacharya, J. Z. Zhang, Q. Shi, and T. Kameda.

Title: An optimal solution to the room search problem.

Technical Report: TR 2006-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: A room is a simple polygon with a prespecified point, called the door, on its boundary. A search starts at the door and must detect any intruder that may be in the room, while making sure that no intruder escapes through the door during the search. Depending on where the door is placed, such a room may or may not be searchable. The two-guard search model is considered, where the two guards are required to be always mutually visible during a search. We present a simple characterization of searchable rooms under such a search model. In addition, we propose a linear algorithm to check a room's searchability.

**[Bilenky et al., 2006]**

Author(s): M. Bilenky, L. Hafer, and A. Kononov.

Title: A complete description of the scheduling polytope for two activities with independent start and finish times.

Technical Report: TR 2006-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2006.

Abstract: In this paper we develop a complete set of facets for the two-activity scheduling polytope P where the duration of the activities is not known *a priori*. The start and finish time of each activity is modelled with a continuous variable. The set of facets is shown to depend on the values of the upper and lower bounds on the start and finish times.

**[Buresh-Oppenheim and Mitchell, 2006]**

Author(s): Josh Buresh-Oppenheim and David Mitchell.

Title: Minimum 2-CNF resolution refutations in polynomial time.

Technical Report: TR 2006-26, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2006.

Abstract: We present an algorithm for finding a smallest resolution refutation of any 2-CNF formula in polynomial time.

**[Drew et al., 2006]**

Author(s): Mark S. Drew, Cheng Lu, and Graham D. Finlayson.

Title: Ambient shadow detection and removal via flash/noflash pairs.

Technical Report: TR 2006-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2006.

Abstract: Flash/noflash pairs have been studied as a simple mechanism for supplying less-noisy information for dark areas of an ambient-light image. This research direction also includes image pairs or sequences of nighttime or obscured imagery such as surveillance video. Ambient imagery has some advantages over flash images, in general, in that a flash can create harsh or unpleasing effects, and can also generate additional shadowing. The extra shadows from the flash are in fact quite amenable to detection. However, filling in information in image areas occupied by *shadows in the ambient image* has not been effectively considered. Here we consider the problem of detecting shadows in the image taken under ambient light, given extra information from a flash image registered with the first. Clearly, the pure-flash image is the difference between the two – and has no ambient-shadow component. But the question of how best to detect the ambient shadow remains. We argue that first going to a "spectrally sharpened" color space, and then focusing on the difference in a log domain of the flash image and the ambient image, gives a very simple feature space consisting of two components – one in an illuminant-change 3-vector direction, and one along the gray axis. This space provides excellent separation of the shadow and nonshadow areas. Regressing pixel data from the flash image to the nonflash one adjusts the scale properly, and then inserting edges from the flash image inside the ambient-shadow region into the ambient image edge map and inverting Poisson's equation fills in the shadow. In this way, we arrive at an image with the advantages of the ambient-only image – warmth, no flash effects such as disturbing illumination dropoff with distance and pixel saturation etc. – but no shadows.

**[Farahbod et al., 2006]**

Author(s): R. Farahbod, V. Gervasi, U. Glässer, and M. Memon.

Title: Design and specification of the CoreASM execution engine, part 1: The kernel.

Technical Report: TR 2006-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: The CoreASM project, an attempt to make *abstract state machines* (ASMs) executable, was first introduced in [FaGeGl05a, FaGeGl05b]. The aim of this project is to specify and implement an extensible ASM execution engine for an extensible language that is as close as possible to the mathematical definition of pure ASMs. This document focuses on the design of the kernel of the CoreASM engine and the bare essential structures of the CoreASM language, and provides an update on the latest achievements and improvements.

**[Gao and Ester, 2006]**

Author(s): Byron J. Gao and Martin Ester.

Title: Turning clusters into patterns: Rectangle-based discriminative data description.

Technical Report: TR 2006-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2006.

Abstract: The ultimate goal of data mining is to extract knowledge from massive data. Knowledge is ideally represented as human-comprehensible patterns from which end-users can gain intuitions and insights. Yet not all data mining methods produce such readily understandable knowledge; e.g., most clustering algorithms output sets of points as clusters. In this paper, we perform a systematic study of cluster description that generates interpretable patterns from clusters. We introduce and analyze novel description formats with greater expressive power, and motivate and define novel description problems that specify different trade-offs between interpretability and accuracy. We also present effective heuristic algorithms together with their empirical evaluations.

**[Gao et al., 2006]**

Author(s): Byron J. Gao, Obi L. Griffith, Martin Ester, and Steven J. M. Jones.

Title: Discovering significant OPSM subspace clusters in massive gene expression data.

Technical Report: TR 2006-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: Order-preserving submatrices (OPSMs) have been accepted as a biologically meaningful subspace cluster model, capturing the general tendency of gene expressions across a subset of conditions. In an OPSM, the expression levels of all genes induce the same linear ordering of the conditions. Discovery of significant OPSMs can play an essential role in inferring gene regulatory networks. OPSM mining is reducible to a special case of the sequential pattern mining problem, in which a pattern and its supporting sequences uniquely specify an OPSM subspace cluster. Those small *twig clusters*, specified by long patterns with naturally low support, incur explosive computational costs and would be completely pruned off by most existing methods for massive datasets containing thousands of conditions and hundreds of thousands of genes, which are common in today's gene expression analysis. However, it is of particular interest to biologists to reveal such small groups of genes that are tightly coregulated under many conditions, and some pathways or processes might require only two genes to act in concert. In this paper, we introduce the *KiWi* mining framework for massive datasets, which exploits two parameters k and w to provide a biased testing on a bounded number of candidates, substantially reducing the search space and problem scale and targeting highly promising seeds that lead to significant clusters and twig clusters. Extensive biological and computational evaluations on real datasets demonstrate that *KiWi* can effectively mine biologically meaningful OPSM subspace clusters with good efficiency and scalability.

**[Ge et al., 2006]**

Author(s): Rong Ge, Martin Ester, Wen Jin, and Zengjian Hu.

Title: Disc-based data summarization: Problems and algorithms.

Technical Report: TR 2006-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2006.

Abstract: Data summarization has been recognized as a fundamental operation in database systems and data mining, with important applications such as data compression and privacy preservation. While existing methods such as CF-values and Data Bubbles may perform reasonably well, they cannot provide any guarantees on the quality of their results. In this paper, we introduce a summarization approach for numerical data based on discs, formalizing the notion of quality. Our objective is to find a minimal set of discs, i.e., spheres satisfying a radius and a significance constraint, covering the given dataset. Since the proposed problem turns out to be NP-complete, we design two different approximation algorithms. These algorithms have a quality guarantee, but they do not scale well to large databases. However, the machinery from approximation algorithms allows a precise characterization of a further, heuristic algorithm. This heuristic, efficient algorithm exploits multi-dimensional index structures and can be well-integrated with database systems. Our experimental evaluation demonstrates that our heuristic algorithm generates summaries that outperform the state-of-the-art Data Bubbles in terms of internal measures as well as in terms of external measures when using the data summaries as input for clustering methods.

**[Glässer et al., 2006a]**

Author(s): Uwe Glässer, Sarah Rastkar, and Mona Vajihollahi.

Title: Computational modeling and experimental validation of aviation security procedures.

Technical Report: TR 2006-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2006.

Abstract: Civil aviation security encompasses a variety of protective measures related to airport and aircraft security which are regulated by regional, national and international standards and recommended practices. Due to the very nature of natural language, these informal requirements lack precision and are inappropriate for validation and verification of resulting properties by computational means. We propose here a novel approach to computational modeling and experimental validation of aviation security combining *abstract state machine* (ASM) specification techniques with symbolic *model checking* (MC) techniques. Specifically, we use a *probabilistic* variant of the ASM computation model in combination with probabilistic MC techniques as a computational approach to establishing the consistency, coherence, and completeness of procedural security requirements.

**[Glässer et al., 2006b]**

Author(s): Uwe Glässer, Sarah Rastkar, and Mona Vajihollahi.

Title: Modeling and analysis of aviation security procedures.

Technical Report: TR 2006-23, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2006.

Abstract: Civil aviation security encompasses a variety of protective measures related to airport and aircraft security which are regulated by regional, national and international standards and recommended practices. Due to the very nature of natural language, these informal requirements lack precision and are inappropriate for validation and verification of resulting properties by computational means. We propose here a novel approach to computational modeling and experimental validation of aviation security combining *abstract state machine* (ASM) specification techniques with symbolic *model checking* (MC) techniques. Specifically, we use a *probabilistic* variant of the ASM computation model in combination with probabilistic MC techniques as a computational approach to establishing the consistency, coherence, and completeness of procedural security requirements.

**[Hsu and Hefeeda, 2006a]**

Author(s): Cheng-Hsin Hsu and Mohamed Hefeeda.

Title: On the Accuracy and Complexity of Rate-Distortion Models for FGS-Encoded Video Sequences.

Technical Report: TR 2006-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: Rate-distortion (R-D) models are functions that describe the relationship between the bitrate and the expected level of distortion in the reconstructed video stream. R-D models enable optimization of the received video quality under different network conditions. Several R-D models have been proposed for the increasingly popular fine-grained scalable (FGS) video sequences. However, the models' relative performance has not been thoroughly analyzed. Moreover, the time complexity of each model is not known, nor is the range of bitrates in which the model produces valid results. This lack of quantitative performance analysis makes it difficult to select the model that best suits a target streaming system. In this paper, we classify, analyze, and rigorously evaluate all R-D models proposed for FGS coders in the literature. We classify R-D models into three categories: analytic, empirical, and semi-analytic, and describe the characteristics of each category. We analyze the R-D models by following their mathematical derivations, scrutinizing the assumptions made, and explaining when the assumptions fail and why. In addition, we implement all R-D models, a total of eight, and evaluate them using a diverse set of video sequences. In our evaluation, we consider various source characteristics, diverse channel conditions, different encoding/decoding parameters, different frame types, and several performance metrics, including the accuracy, range of applicability, and time complexity of each model. We also present clear systematic procedures (pseudocode) for constructing various R-D models from a given video sequence. Based on our experimental results, we present a justified list of recommendations on selecting the best R-D models for video-on-demand, video conferencing, real-time, and peer-to-peer streaming systems.

**[Hsu and Hefeeda, 2006b]**

Author(s): Cheng-Hsin Hsu and Mohamed Hefeeda.

Title: Optimal Bit Allocation for Fine-Grained Scalable Video Sequences in Distributed Streaming Environments.

Technical Report: TR 2006-20, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2006.

Abstract: We present optimal schemes for allocating bits of fine-grained scalable video sequences among multiple senders streaming to a single receiver. The senders are heterogeneous in their outgoing bandwidth, and they hold different portions of the video stream. We first formulate and optimally solve the problem for individual frames, then we generalize to the multiple-frame case. Specifically, we formulate the allocation problem as an optimization problem with an objective function that minimizes the distortion. By using the piecewise-linear rate-distortion model, we transform the general (nonlinear) optimization problem into an integer linear programming (ILP) problem. We further design a simple rounding scheme that transforms the ILP problem into a linear programming (LP) one, which can be solved efficiently using common optimization techniques such as the Simplex method. We prove that our rounding scheme always produces a feasible solution, and the solution is within a negligible margin of the optimal one. We also propose a new algorithm (FGSAssign) for the single-frame allocation problem that runs in O(n log n) steps, where n is the number of senders. We prove that FGSAssign is optimal. Furthermore, we propose a heuristic algorithm (mFGSAssign) that produces near-optimal solutions for the multiple-frame case and runs an order of magnitude faster than the optimal one. Our extensive experimental study shows the effectiveness of our allocation algorithms in improving the video playout quality.

**[Kameda et al., 2006]**

Author(s): T. Kameda, J. Z. Zhang, and M. Yamashita.

Title: Characterization of Polygons Searchable by a Boundary 1-Searcher.

Technical Report: TR 2006-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: Suppose intruders are in a dark polygonal room and can move at a finite but unbounded speed, trying to avoid detection. A boundary 1-searcher can move along the polygon boundary, equipped with a flashlight that she can direct in any direction. A polygon is *searchable* if there is a schedule for the searcher to follow in order to always detect the intruders. We identify three forbidden patterns such that a given polygon is searchable by a boundary 1-searcher if and only if it contains none of them.

**[Li and Zhang, 2006]**

Author(s): John Li and Hao Zhang.

Title: Guaranteed Non-Obtuse Remeshing and Mesh Decimation via Constrained Optimizations.

Technical Report: TR 2006-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: The problem of nonobtuse triangulation has been studied in 2D domains; however, guaranteed nonobtuse remeshing of curved surfaces is still an open problem. Nonobtuse meshes are desirable in certain situations, such as geodesic computations and planar mesh embedding. In this paper, we propose a solution to nonobtuse remeshing and nonobtuse decimation. Our approach utilizes a 'deform-to-fit' strategy to solve the remeshing problem. We first construct an initial nonobtuse mesh, via a modified version of the Marching Cubes algorithm, that roughly approximates the input mesh. An iterative constrained optimization is then applied to obtain a high-quality approximation of the input mesh. At each iteration, the constraints ensure that the output mesh is guaranteed nonobtuse. By making use of quadric-based errors, we iteratively decimate the high-detail nonobtuse mesh in a similarly constrained manner to obtain a hierarchy of nonobtuse meshes.

**[Liu et al., 2006]**

Author(s): Rong Liu, Hao Zhang, and Oliver van Kaick.

Title: An Investigation into Spectral Sequencing Using Graph Distance.

Technical Report: TR 2006-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: The construction of linear mesh layouts has found various applications, such as implicit mesh filtering and mesh streaming, where a variety of layout quality criteria, e.g., span and width, can be considered. While spectral sequencing, derived from the Fiedler vector, is one of the best-known heuristics for minimizing width, it does not perform as well as the Cuthill-McKee (CM) scheme in terms of span, and vice versa. In this paper, we cast optimal mesh layout generation as a distance-preserving 1-D projection problem in the context of kernel principal component analysis and propose to use the subdominant eigenvector of a kernel (affinity) matrix. Despite the non-sparsity of the affinity operators we use, the layouts can be computed efficiently for large meshes through subsampling and eigenvector extrapolation. Our experiments show that the new mesh layouts outperform sequences derived from the Fiedler vector in terms of span, and sequences obtained from CM in terms of width and other important quality criteria. Therefore, in applications where several such quality criteria influence algorithm performance at the same time, e.g., mesh streaming and implicit mesh filtering, the new mesh layouts provide a better trade-off.

**[Mandryk and Atkins, 2006]**

Author(s): Regan L. Mandryk and M. Stella Atkins.

Title: A Fuzzy Physiological Approach for Continuously Modeling Emotion During Interaction with Play Technologies.

Technical Report: TR 2006-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2006.

Abstract: The popularity of computer games has exploded in recent years, yet methods of evaluating user emotional state during play experiences lag far behind. There are few methods of assessing emotional state, and even fewer methods of quantifying emotion during play. This paper presents a novel method for continuously modeling emotion using physiological data. A fuzzy logic model transformed four physiological signals into arousal and valence. A second fuzzy logic model transformed arousal and valence into five emotions: boredom, challenge, excitement, frustration, and fun. Modeled emotions compared favorably with a manual approach, and the means were also evaluated with subjective self-reports, exhibiting the same trends as reported emotions for fun, boredom, and excitement. This approach provides a method for quantifying emotional states continuously during a play experience.

**[Mitchell et al., 2006]**

Author(s): David Mitchell, Eugenia Ternovska, Faraz Hach, and Raheleh Mohebali.

Title: A Framework for Modelling and Solving Search Problems.

Technical Report: TR 2006-24, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2006.

Abstract: We propose a framework for modelling and solving search problems using logic, and describe a project whose goal is to produce practically effective, general-purpose tools for representing and solving search problems based on this framework. The mathematical foundation lies in the areas of finite model theory and descriptive complexity, which provide us with many classical results, as well as powerful techniques not available to many other approaches with similar goals. We describe the mathematical foundations; explain an extension to classical logic with inductive definitions that we consider central; give a summary of complexity and expressiveness properties; describe an approach to implementing solvers based on grounding; present grounding algorithms based on an extension of the relational algebra; describe an implementation of our framework which includes use of inductive definitions, sorts, and order; and give experimental results comparing the performance of our implementation with ASP solvers and another solver based on the same framework.

**[Saleh and Hefeeda, 2006]**

Author(s): Osama Saleh and Mohamed Hefeeda.

Title: Modeling and Caching of Peer-to-Peer Traffic.

Technical Report: TR 2006-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying caches in different Autonomous Systems (ASes) to reduce the cost imposed by P2P traffic on Internet service providers and to alleviate the load on the Internet backbone. We conduct a measurement study to analyze P2P characteristics that are relevant to caching, such as object popularity and size distributions. Our study shows that the popularity of P2P objects in different ASes can be modeled by a Mandelbrot-Zipf distribution, and that several workloads exist in P2P systems, each of which can be modeled by a Beta distribution. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is only a few percentage points smaller than that of the off-line optimal algorithm. Our results also show that our algorithm achieves a byte hit rate that is at least 40% higher than, and at most triple, the byte hit rate of common web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, a common case in P2P systems, and achieves a byte hit rate that is even higher than in the case of full downloads.

**[Schulte and Drew, 2006]**

Author(s): Oliver Schulte and Mark S. Drew.

Title: An Algorithmic Proof That the Family Conservation Laws Are Optimal for the Current Reaction Data.

Technical Report: TR 2006-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2006.

Abstract: We describe a machine-learning system that uses linear vector-space techniques for inference from observations, extending previous work on model construction for particle physics [Valdés-Pérez 1996, Valdés-Pérez 1994, Kocabas 1991]. The program searches for quantities conserved in all reactions from a given input set; given current data, it rediscovers the family conservation laws: baryon#, electron#, muon#, and tau#. We show that these families are uniquely determined by the data; they are the only particle families that correspond to a complete set of selection rules.

**[Tauber et al., 2006]**

Author(s): Zinovi Tauber, Ze-Nian Li, and Mark S. Drew.

Title: Review and Preview: Disocclusion by Inpainting for Image-Based Rendering.

Technical Report: TR 2006-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2006.

Abstract: *Image-based rendering* takes as input multiple images of an object and generates *photorealistic* images from novel viewpoints. This approach avoids explicitly modeling scenes by replacing the modeling phase with an object reconstruction phase. Reconstruction is achieved in two possible ways: recovering 3D point locations using multiview stereo techniques, or reasoning about the consistency of each voxel in a discretized object volume space. The most challenging problem for image-based reconstruction is the presence of occlusions. Occlusions make reconstruction ambiguous for object parts not visible in any input image. These parts must be reconstructed in a visually acceptable way. This review presents our insights on image inpainting to provide both attractive reconstruction and a framework for increasing the accuracy of depth recovery. *Digital image inpainting* refers to any method that fills in holes of arbitrary topology in images so that they seem to be part of the original image. Available methods are broadly classified as structural inpainting or textural inpainting. Structural inpainting reconstructs using prior assumptions and boundary conditions, while textural inpainting only considers available data from texture exemplars or other templates. Of particular interest is research on structural inpainting applied to 3D models, demonstrating its effectiveness for disocclusion.

**[Wang et al., 2006]**

Author(s): Pengpeng Wang, Ramesh Krishnamurti, and Kamal Gupta.

Title: View Planning with Combined Viewing and Traveling Costs.

Technical Report: TR 2006-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2006.

Abstract: In this report, we tackle the problem of view planning with travel cost, denoted Traveling VPP. It refers to planning a sequence of sensing actions with minimum total cost by a robot-sensor system to completely inspect the surfaces of objects with known geometries in a known workspace. The cost to minimize is a combination of the view cost, proportional to the number of viewpoints planned, and the travel cost for the robot to realize them. We show that Traveling VPP is equivalent to Set Covering on a Graph via reductions in both directions. We use a recent result on the hardness of approximations to show that Traveling VPP is poly-log inapproximable. We give a linear-program-based rounding algorithm that achieves an approximation ratio of the order of the view frequency. Also, we show, via a reduction to the group Steiner tree problem, that the existing approximation algorithm applies to Traveling VPP, thereby providing a poly-log approximation ratio. This parallels the approximation ratio results for the set covering problem, i.e., the best approximation ratio for set covering is either the frequency or the logarithm of the number of elements, whichever is smaller. We then show that our rounding algorithm has a similar frequency-factor approximation ratio for other related combinatorial optimization problems by giving the necessary reductions. We also consider several generalizations of Traveling VPP for more realistic models, where image registration constraints and visibility range and angle constraints are accounted for.

**[Wong et al., 2006a]**

Author(s): Raymond Chi-Wing Wong, Ada Wai-Chee Fu, Jian Pei, Ke Wang, Steven Chi-Wen Wan, and C. S. Lo.

Title: Multidimensional k-Anonymization by Linear Clustering Using Space-Filling Curves.

Technical Report: TR 2006-27, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2006.

Abstract: Anonymization has been well recognized as an effective approach to protect privacy. In this paper, we study the problem of k-anonymization of numeric data. We propose a simple yet effective approach to this problem. We transform a data set with a multidimensional numeric quasi-identifier into a data set with a one-dimensional numeric quasi-identifier using space-filling curves. Efficacious algorithms are proposed to compute the k-anonymization on the transformed data set. The k-anonymized data set is then transformed back to the original multidimensional space with the k-anonymity preserved. Our empirical studies on both real and synthetic data sets show that, compared to the state-of-the-art methods, our method achieves a smaller distortion and is more efficient and scalable.

**[Wong et al., 2006b]**

Author(s): Raymond Chi-Wing Wong, Ada Wai-Chee Fu, Ke Wang, and Jian Pei.

Title: Minimality Attack in Privacy Preserving Data Publishing.

Technical Report: TR 2006-28, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2006.

Abstract: Data publishing generates much concern over the protection of individual privacy. In the well-known k-anonymity model and related models such as l-diversity and (α,k)-anonymity, the adversary is assumed to possess knowledge of an external table with information on the quasi-identifiers of individuals. In this paper, we show that knowledge of the mechanism or algorithm of anonymization for data publication can also lead to extra information that assists the adversary and jeopardizes individual privacy. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. We call such an attack a minimality attack. In this paper, we propose a model called m-confidentiality which deals with the individual privacy issue with consideration of minimality attacks. Though the problem of optimal m-confidentiality anonymization is NP-hard, we propose an algorithm which generates m-confidential data sets efficiently. We also conducted experiments to show how such an attack can succeed on a real dataset and that our algorithm suffers almost no penalty in information loss.

**[Zhang and Kameda, 2006]**

Author(s): J. Z. Zhang and T. Kameda.

Title: Where to Build a Door.

Technical Report: TR 2006-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2006.

Abstract: A room is a simple polygon with a prespecified point, called the door, on its boundary. Search starts at the door and must detect any intruder that may be in the room, while making sure that no intruder escapes through the door during the search. Depending on where the door is placed, such a room may or may not be searchable. We present an efficient algorithm that can determine all the points on the boundary where the door should be placed in order for the polygon to be searchable by two guards on the boundary who maintain mutual visibility at all times, or by a single searcher with a flashlight. We make extensive use of diagrams similar to the Visibility Obstruction Diagram (VOD).
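The two-guard search condition in the abstract above rests on repeated mutual-visibility queries inside a simple polygon. As a minimal illustrative sketch (not the report's VOD-based algorithm), the hypothetical `visible` function below tests whether two interior points see each other, assuming general position (no collinear degeneracies):

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a): +1 left turn, -1 right turn, 0 collinear."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def properly_cross(p, q, a, b):
    """True if segments pq and ab cross at a single point interior to both."""
    return (orient(p, q, a) * orient(p, q, b) < 0 and
            orient(a, b, p) * orient(a, b, q) < 0)

def inside(pt, poly):
    """Ray-casting point-in-polygon test for a simple polygon."""
    x, y = pt
    result = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray at height y
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                result = not result
    return result

def visible(poly, p, q):
    """Two interior points see each other iff segment pq properly crosses no
    polygon edge and stays inside the polygon (checked at the midpoint)."""
    n = len(poly)
    if any(properly_cross(p, q, poly[i], poly[(i + 1) % n]) for i in range(n)):
        return False
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return inside(mid, poly)

# An L-shaped room: the reflex corner at (1, 1) blocks some lines of sight.
L_room = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)]
```

For example, two points in the bottom bar of `L_room` see each other, while a point in each arm is blocked by the reflex corner; a full searcher-schedule algorithm such as the one in the report builds on many such visibility queries.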


**[Aggarwal and Pei, 2005]**

Author(s): Charu C. Aggarwal and Jian Pei.

Title: A Framework for Adversarial Privacy Preserving Data Mining.

Technical Report: TR 2005-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2005.

Abstract: In recent years, privacy preserving data mining has become an important topic because of the widespread proliferation of demographic and sensitive data. A rudimentary way to perform privacy preserving data mining is to simply hide the information in some of the sensitive fields picked by a user. However, such a method is far from satisfactory in its ability to perform privacy preserving data mining. Real data records are not randomly distributed. As a result, some fields in the records may be correlated with one another. If the correlation is sufficiently high, it may be possible to guess some of the sensitive fields using other fields. In this paper, we study the problem of adversarial privacy preserving data mining, which is to hide a minimal set of entries so that the privacy of the sensitive fields is satisfactorily preserved. We model the problem concisely and show that finding an optimal solution to the problem is NP-hard. In addition, we develop an efficient heuristic algorithm which can find good solutions in practice. An extensive performance study is conducted on both synthetic and real data sets to examine the effectiveness of our approach.

**[Batu et al., 2005]**

Author(s): Tugkan Batu, Funda Ergun, and Cenk Sahinalp.

Title: Oblivious String Embeddings and Edit Distance Approximations.

Technical Report: TR 2005-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2005.

Abstract: We introduce an oblivious embedding that maps any string of length n to a string of length at most n/r for any user-specified value of r. For any given r, our embedding provides a distortion of Õ(r^{1+μ}) for some μ = o(1) under edit distance, which we prove to be (almost) optimal. The embedding can be computed in Õ(2^{1/μ} n) time. We also show how to use the main ideas behind the construction of our embedding to obtain an efficient algorithm for approximating the edit distance between two strings. More specifically, for any 1 > ε ≥ 0, we describe an algorithm to compute the edit distance between two strings S and R in time Õ(n^{1+ε}), within an approximation factor of min{ n^{(1-ε)/3 + o(1)}, (ed(S,R)/n^ε)^{1/2 + o(1)} }. For the case ε = 0, we get an Õ(n)-time algorithm that approximates the edit distance within a factor of min{ n^{1/3 + o(1)}, ed(S,R)^{1/2 + o(1)} }, improving the recent result of Bar-Yossef et al. [Bar-Yossef et al. 2004].

**[Brantingham et al., 2005a]**

Author(s): Patricia L. Brantingham, Uwe Glässer, Bryan Kinney, Komal Singh, and Mona Vajihollahi.

Title: A Computational Model for Simulating Spatial and Temporal Aspects of Crime in Urban Environments.

Technical Report: TR 2005-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2005.

Abstract: In this paper, we present a novel approach to computational modeling of social systems. By combining the abstract state machine (ASM) formalism with the multi-agent modeling paradigm, we obtain a formal semantic framework for modeling and integration of established theories of crime analysis and prediction. We focus here on *spatial* and *temporal* aspects of crime in urban areas. Our work contributes to a new multidisciplinary research effort broadly classified as Computational Criminology.

**[Brantingham et al., 2005b]**

Author(s): Patricia L. Brantingham, Uwe Glässer, Komal Singh, and Mona Vajihollahi.

Title: Mastermind: Modeling and Simulation of Criminal Activity in Urban Environments.

Technical Report: TR 2005-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: Based on the abstract state machines (ASM) formalism in combination with a multi-agent based modeling paradigm, we devise a formal framework for semantic modeling and systematic integration of established theories of crime analysis and prediction. Specifically, we focus here on crime in urban areas and model spatial and temporal aspects of crime potentially involving multiple offenders and multiple targets. Our goal is to provide abstract simulation models that can serve as effective instruments in analysis and prevention of crime.

**[Brantingham et al., 2005c]**

Author(s): Patricia L. Brantingham, Uwe Glässer, Komal Singh, and Mona Vajihollahi.

Title: Mastermind: Modeling and Simulation of Criminal Activity in Urban Environments.

Technical Report: TR 2005-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2005. Revised version of SFU-CMPT-TR-2005-01, Feb 2005.

Abstract: Based on the abstract state machines (ASM) formalism in combination with a multi-agent based modeling paradigm, we devise a formal framework for semantic modeling and systematic integration of established theories of crime analysis and prediction. Specifically, we focus here on crime in urban areas and model *spatial* and *temporal* aspects of crime potentially involving multiple offenders and multiple targets. Our goal is to provide abstract simulation models that can serve as effective instruments in analysis and prevention of crime.

**[Delgrande et al., 2005]**

Author(s): James Delgrande, Daphne H. Liu, Torsten Schaub, and Sven Thiele.

Title: COBA 2.0: A Consistency-Based Belief Change System.

Technical Report: TR 2005-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2005.

Abstract: We describe COBA 2.0, an implementation of a consistency-based framework for expressing belief change, focusing on revision and contraction (possibly) incorporating integrity constraints. This general framework was first proposed in [DS03], in which the authors argued that the revision and contraction operators satisfy most of the AGM postulates [AGM85] while being amenable to implementation. In this paper, following a review of the work of [DS03], we present COBA 2.0's high-level algorithm, work through several examples, and discuss our experimental results. A distinguishing feature of COBA 2.0 is that it builds on SAT technology by using a module comprising a state-of-the-art SAT solver for consistency checking.

**[Drew et al., 2005]**

Author(s): Mark S. Drew, Tim K. Lee, and Andrew Rova.

Title: Shape Retrieval with Eigen-CSS Search.

Technical Report: TR 2005-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: Shape retrieval programs comprise two components: a shape representation and a matching algorithm. Building the representation on scale-space filtering and the curvature function of a closed boundary curve, curvature scale space (CSS) has been demonstrated to be a robust 2D shape representation. The adoption of the CSS image as the default in the MPEG-7 standard, with a matching algorithm utilizing the contour maxima of the CSS image, makes this feature of interest perforce. In this paper, we propose a framework for a novel approach to both representing and matching the CSS feature. Our contribution consists of three novel steps, each of which effects a profound speedup on CSS image matching. Each step is a well-known technique in other domains, but the proposed concatenation of steps leads to a novel approach to this subject which captures shape information much more efficiently. Firstly, the standard algorithm for the feature involves a complicated and time-consuming search, since the zero of arc length is not known in any new contour. Here, we first obviate this search via a phase correlation transform in the spatial dimension of the CSS image. Then, using experience derived from medical imaging, we define a set of marginal features summarizing the transformed image. The resulting feature space is amenable to dimension reduction via subspace projection methods, with a dramatic speedup in time, as well as an orders-of-magnitude reduction in space. The resulting program has accuracy comparable to that of the original program, which uses contour maxima. The advantages of the new algorithm are its simplicity and execution speed. Both a class-matching evaluation procedure and an ROC analysis show the new method to be effective in shape retrieval. Sensitivity analyses also show that the method of using phase correlation on CSS images, followed by generation of the new marginal-sum feature vector plus decorrelation by eigen-analysis, is robust to the dimensionality used, substantively better than using raw CSS images, and invariant to reflections, as well as being simple, fast, and effective.

**[Ester et al., 2005]**

Author(s): Martin Ester, Rong Ge, and Wen Jin.

Title: Location-Aware Clustering of Sensor Network Data.

Technical Report: TR 2005-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2005.

Abstract: Sensor networks are becoming increasingly important in applications such as environmental monitoring or traffic control. They collect large amounts of sensor data (e.g., on temperature and precipitation) in a highly decentralized manner. The special properties of sensor networks distinguish them from traditional database systems and even distributed systems. Hardware characteristics constrain the amount of communication, local computation, and storage. Sensor networks have spatial characteristics (the locations of the sensors) as well as non-spatial characteristics (the sensor values measured), both of which need to be taken into account for meaningful data mining. Recently, sensor networks have attracted more and more attention from researchers in the networks, algorithms, and database communities. To the best of our knowledge, this paper presents the first systematic study of data mining issues in sensor networks. We focus on clustering, a major unsupervised data mining task, and introduce the problem of location-aware clustering of sensor network data. We propose an algorithm for this clustering problem that satisfies the given hardware constraints. We believe that the same algorithmic scheme can be applied to other data mining tasks in sensor networks as well. Our experimental evaluation using synthetic and real data demonstrates the effectiveness and efficiency of the proposed algorithm.

**[Farahbod et al., 2005a]**

Author(s): R. Farahbod, V. Gervasi, and U. Glässer.

Title: Design and Specification of the CoreASM Execution Engine.

Technical Report: TR 2005-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: In this report we introduce a new research effort in making abstract state machines executable. The aim is to specify and implement an execution engine for a language that is as close as possible to the mathematical definition of pure ASM. The paper presents the general architecture of the engine, together with a high-level description of the extensibility mechanisms that are used by the engine to accommodate arbitrary backgrounds, scheduling policies, and new rule forms.

**[Farahbod et al., 2005b]**

Author(s): R. Farahbod, U. Glässer, and M. Vajihollahi.

Title: Abstract Operational Semantics of the Business Process Execution Language for Web Services.

Technical Report: TR 2005-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005. Revised version of SFU-CMPT-TR-2004-03, April 2004.

Abstract: We formally define an abstract operational semantics for the Business Process Execution Language for Web Services (BPEL), a forthcoming industrial standard for automated business processes. Based on the *abstract state machine* (ASM) paradigm, we model the dynamic properties of the key language constructs through the construction of a *BPEL Abstract Machine* in terms of partially ordered runs of distributed real-time ASMs. Specifically, we focus here on the *process execution model* and the underlying *execution lifecycle* of BPEL activities. The goal of our work is to provide a well-defined semantic foundation for establishing the key language attributes by eliminating deficiencies hidden in the informal language definition. The resulting abstract machine model provides a comprehensive formalization of the BPEL dynamic semantics at various levels of abstraction, including an executable formal semantics.

**[Hafer, 2005]**

Author(s): L. Hafer.

Title: Dylp: A Dynamic LP Code.

Technical Report: TR 2005-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2005.

Abstract: Dylp is a full implementation of the dynamic simplex algorithm for linear programming. Dynamic simplex attempts to maintain a reduced active constraint system by regularly purging loose constraints and variables with unfavourable reduced costs, and adding violated constraints and variables with favourable reduced costs. Abstractly, the code alternates between primal and dual simplex algorithms, using dual simplex to reoptimise after updating the constraint set and primal simplex to reoptimise after updating the variable set.

**[Kameda et al., 2005]**

Author(s): Tsunehiko Kameda, Yi Sun, and Luis Goddyn.

Title: An Optimization Problem Related to Truncated Harmonic Broadcasting for Video-on-Demand.

Technical Report: TR 2005-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: Fixed-Delay Pagoda Broadcasting (FDPB) is a practical broadcasting scheme for video-on-demand proposed by Pâris, and its prototype has been built. In order for any broadcasting scheme to be truly practical, it is crucial that the required bandwidth be minimized. A heuristic is used to compute the optimal number s of subchannels used by FDPB to minimize the required bandwidth, but it does not always generate the optimal solution. To find the truly optimal s, the only known algorithm must try close to 0.5m candidate values of s, where m is a parameter of FDPB which can be rather large. This paper analyzes the combinatorial optimization problem involved in detail and limits the search space for the optimal s down to κ√m different values, where κ is approximately 0.65. This reduces the running time of a computer program to compute the optimal value for s from a few hours down to a few seconds when m = 10,000. This impressive reduction is more than what the ratio 0.5m/(κ√m) would suggest, because the computation time depends on the values of s tested. The technique used in this paper could be applied to other similar problems.

**[Obradovic et al., 2005]**

Author(s): Nenad Obradovic, Joseph Peters, and Goran Ruzic.

Title: Efficient Domination in Circulant Graphs with Two Chord Lengths.

Technical Report: TR 2005-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2005.

Abstract: Domination is an important property in the design of efficient computer interconnection networks. We provide a complete characterization of circulant graphs with two chord lengths that admit an efficient dominating set. In particular, for 3-regular and 4-regular circulant graphs, we give necessary and sufficient conditions for the existence of efficient dominating sets and we describe their exact structure according to the relationship between chords.

**[Sun and Kameda, 2005a]**

Author(s): Yi Sun and Tsunehiko Kameda.

Title: Harmonic Block Windows Scheduling Through Harmonic Windows Scheduling.

Technical Report: TR 2005-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2005.

Abstract: Broadcasting offers a bandwidth-efficient way of delivering popular content in media-on-demand (MoD) systems. Harmonic windows scheduling (HWS) is a mathematical framework that is relevant to bandwidth minimization in MoD broadcasting. Given c channels, the optimal HWS problem tries to schedule as many pages of a video as possible in the c channels. Let k be the largest integer n satisfying H_n <= c, where H_n is the n-th harmonic number. Then k is an upper bound on the number of pages that can be scheduled in c channels if the HWS framework is used. Thus bandwidth of at least (c - H_k) is wasted. We have recently generalized HWS to harmonic block windows scheduling (HBWS), which can overcome this limitation inherent in HWS. It divides each page into a set of b equal-sized fragments, thus providing a way of scheduling fragments of the same page at different periods. We propose a method that starts with a round-robin tree representing an HWS schedule and improves it to generate an HBWS schedule that achieves a higher ratio N/b, where N is the number of fragments scheduled. For up to six channels, we demonstrate that we can always achieve N/b > k. We also show that as we increase b, N/b approaches the theoretical upper bound.

**[Sun and Kameda, 2005b]**

Author(s): Yi (Richard) Sun and Tsunehiko (Tiko) Kameda.

Title: Harmonic Block Windows Scheduling for Video-on-Demand.

Technical Report: TR 2005-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: Harmonic windows scheduling is a mathematical framework recently formulated by Bar-Noy et al., which is applicable to VoD broadcasting. This model assumes c channels, each with bandwidth equal to the display rate b. Unfortunately, the model inherently wastes bandwidth at least equal to (c - H_n)b, where H_m is the m-th harmonic number and n is the largest integer m satisfying H_m <= c. We introduce a new framework, called ``harmonic block windows scheduling'' (HBWS), to overcome this problem by dividing each page into β fragments and scheduling them independently of each other. By allowing the last (i.e., the (n+1)-st) page to have fewer than β fragments, we can pack more fragments into the c channels. We show that HBWS instances can achieve smaller average waiting time than all other known schemes using uniform channels of bandwidth b. Moreover, the average waiting time approaches the optimum value as β is made larger. Quasi-Harmonic Broadcasting (QHB) can asymptotically achieve the lower bound on server bandwidth. We present HBWS instances that outperform QHB schedules if the same amount of bandwidth and the same number of fragments are used. We also demonstrate that there exist schemes based on the fixed start points policy that can achieve a shorter average initial waiting time than any scheme based on the fixed-delay policy.

**[Tauber, 2005]**

Author(s): Zinovi Tauber.

Title: Disocclusion and Photorealistic Object Reconstruction and Reprojection.

Technical Report: TR 2005-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2005.

Abstract: Video sequences often need computer graphics assistance in compositing shots that are hard or impossible to obtain from physical scenes. The shots can be composited with artificial objects, or new scenes can be constructed entirely from a modeled environment. However, modeling complex photorealistic scenes on a computer using geometrical shapes is a very laborious task that generates millions of primitives, is very expensive to render, and is often not realistic enough. In the past decade a new field called image-based rendering has emerged for modeling complex objects and environments from multiple images of a real scene and rendering them from a novel view. Early methods used warping in the screen space; however, for more realism it is necessary to obtain 3D information. Researchers have been investigating how to get depth information using stereo-matching techniques for many years, yet the 3D reconstruction of an object remains inaccurate in the general case. I will present techniques specifically aimed at being able to generate photorealistic novel views. As such, these techniques involve all the steps from object/scene reconstruction (equivalent to modeling) to reprojection (rendering). The approach of these techniques is evolving towards modeling all the phenomena that complicate stereo-matching, including photometric effects and occlusions. While some approaches use a form of matching, some more successful forms actually use a projection of a discretized scene space to each input image space in order to perform consistency checks (e.g., Voxel-Coloring, Space Carving). It is known that for every fixed set of N images there exists a family of shapes that would have some structure occluded from all cameras and therefore will not be rendered correctly from a novel view that can see the occluded points. A form of image disocclusion has also been attempted for over a decade.
Recently, the term digital image inpainting has been introduced to describe the task of filling in holes of arbitrary topology in images. A commonly adopted image model is the BV (Bounded Variations) model, which lets us reason about the existence of minimizers for a priori continuation functionals of the image function. A reasonable approach is to diffuse the color values from the hole boundary into the hole along isophotes. Additional techniques include texture inpainting and even inpainting in 3D, but results are still preliminary.

**[Wang and Gu, 2005]**

Author(s): Yong Wang and Qian-Ping Gu.

Title: Grooming of Symmetric Traffic in Unidirectional SONET/WDM Rings.

Technical Report: TR 2005-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2005.

Abstract: In SONET/WDM networks, a wavelength channel is shared by multiple low-rate traffic demands. The multiplexing is known as *traffic grooming* and realized by SONET add-drop multiplexers (SADM). The *grooming factor* is the maximum number of low-rate traffic demands which can be multiplexed in one wavelength. Since SADMs are expensive, a key optimization problem in traffic grooming is to minimize the number of SADMs. The optimization problem is challenging and NP-hard even for the unidirectional SONET/WDM ring networks with symmetric unitary traffic demands. In this paper, we propose an algorithm for this NP-hard problem. For a set R of symmetric pairs of unitary traffic demands on a SONET ring with n nodes, and a grooming factor of k, our algorithm grooms R into ⌈|R|/k⌉ wavelengths using at most ⌈(1 + 1/k)|R|⌉ + ⌊n/4⌋ SADMs. It can be proved that there exists an instance whose optimal solution requires as many as (1 + 1/k)|R| + |R|/(2k) SADMs, which is very close to our upper bound. For the guaranteed performance, our algorithm achieves a better approximation ratio than previous ones. Our algorithm uses the minimum number of wavelengths, which are also precious resources in optical networks. In addition, the experimental results show that our algorithm has much better practical performance than the previous algorithms in most cases.

**[Yu et al., 2005]**

Author(s): Hui Yu, Jian Pei, Shiwei Tang, and Dongqing Yang.

Title: Mining Most General Multidimensional Summarization of Probable Groups in Data Warehouses.

Technical Report: TR 2005-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2005.

Abstract: Data summarization is an important data analysis task in data warehousing and online analytical processing. In this paper, we consider a novel type of summarization queries, probable group queries, such as 'What are the groups of patients that have a 50% or higher chance of getting lung cancer than the average?' An aggregate cell satisfying the requirement is called a probable group. To make the answer succinct and effective, we propose that only the most general probable groups should be mined. For example, if both groups (smoking, drinking) and (smoking, *) are probable, then the former group should not be returned. The problem of mining the most general probable groups is challenging since the probable groups can be widely scattered in the cube lattice, and do not present any monotonicity in group containment order. We extend the state-of-the-art BUC algorithm to tackle the problem, and develop techniques and heuristics to speed up the search. An extensive performance study is reported to illustrate the effect of our approach.
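As a toy illustration of the pruning idea in the abstract above, the following sketch (hypothetical data and threshold; a brute-force pass over the full cube lattice, not the paper's BUC-based algorithm) marks the probable groups of a two-attribute table and keeps only those with no probable generalization:

```python
from itertools import product

# Tiny made-up dataset: each row is (smoking, drinking, has_cancer).
rows = [
    ("yes", "yes", 1), ("yes", "yes", 1), ("yes", "no", 1),
    ("yes", "no", 0), ("no", "yes", 0), ("no", "yes", 0),
    ("no", "no", 0), ("no", "no", 0),
]

def matches(group, row):
    """A row belongs to a group cell if every non-* attribute agrees."""
    return all(g == "*" or g == v for g, v in zip(group, row[:2]))

def rate(group):
    """Cancer rate among the rows covered by the group cell."""
    hit = [r[2] for r in rows if matches(group, r)]
    return sum(hit) / len(hit) if hit else 0.0

overall = rate(("*", "*"))

def is_probable(group, factor=1.5):
    """A cell is probable if its rate exceeds the average by the factor."""
    return rate(group) >= factor * overall

def more_general(group):
    """All cells obtained by wildcarding one concrete attribute."""
    for i, g in enumerate(group):
        if g != "*":
            yield group[:i] + ("*",) + group[i + 1:]

# Enumerate the full cube lattice (fine for two attributes).
cells = list(product(("yes", "no", "*"), repeat=2))
probable = {c for c in cells if c != ("*", "*") and is_probable(c)}

# Keep only the most general probable groups: prune any cell that
# has a strictly more general probable generalization.
most_general = {c for c in probable
                if not any(g in probable for g in more_general(c))}
```

On this data both ("yes", "yes") and ("yes", "*") are probable, and the pruning step returns only the more general ("yes", "*"), mirroring the example in the abstract.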


**[Bergner and Drew, 2004]**

Author(s): Steven Bergner and Mark S. Drew.

Title: Spatiotemporal-Chromatic Structure of Natural Scenes.

Technical Report: TR 2004-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2004.

Abstract: We investigate the implications of a unified spatiotemporal-chromatic basis for compression and reconstruction of image sequences. Different adaptive methods (PCA and ICA) are applied to generate basis functions. While typically such bases with spatial and temporal extent are investigated in terms of their correspondence to human visual perception, here we are interested in their applicability to multimedia encoding. The performance of the extracted spatiotemporal-chromatic patch bases is evaluated in terms of quality of reconstruction with respect to their potential for data compression. The results discussed here are intended to provide another path towards perceptually-based encoding of visual data by examining the interplay of chromatic features with spatiotemporal ones in data reduction.

**[Bergner et al., 2004]**

Author(s): Steven Bergner, Torsten Möller, and Mark S. Drew.

Title: A Spectral Engine for Visualization.

Technical Report: TR 2004-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2004.

Abstract: Full spectra allow the generation of physically correct renderings of a scene under different lighting conditions. Here, a tool is developed to design a palette of lights and materials having certain properties, such as selective metamerism or colour constancy. The mathematical underpinnings of the tool consist of two parts. First, we make use of a method for optimally characterizing full spectra of lights, surfaces, and transmissive materials by forming eigenvectors of sets of these and then transforming to an intermediate space in which spectral interactions reduce to simple component-wise multiplications. Second, we introduce appropriate optimization functions to create new sets of spectra from old, adding to sets of actual spectra so as to produce lights and reflectances that collude to generate spectral transfer function entries for bringing out or hiding parts of a graphics rendering.

**[Bermond et al., 2004]**

Author(s): Jean-Claude Bermond, Afonso Ferreira, Stéphane Pérennes, and Joseph G. Peters.

Title: Neighbourhood Broadcasting in Hypercubes.

Technical Report: TR 2004-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2004.

Abstract: In the broadcasting problem, one node needs to broadcast a message to all other nodes in a network. If nodes can only communicate with one neighbour at a time, broadcasting takes at least ⌈log_2 N⌉ rounds in a network of N nodes. In the *neighbourhood broadcasting* problem, the node that is broadcasting only needs to inform its neighbours. In a binary hypercube with N nodes, each node has log_2 N neighbours, so neighbourhood broadcasting takes at least ⌈log_2 log_2 (N+1)⌉ rounds. In this paper, we present asymptotically optimal neighbourhood broadcast protocols for binary hypercubes.

**[Bian et al., 2004a]**

Author(s): Zhengbing Bian, Qian-Ping Gu, and Xiao Zhou.

Title: Tight Bounds for Wavelength Assignment on Trees of Rings.

Technical Report: TR 2004-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2004.

Abstract: A fundamental problem in communication networks is wavelength assignment (WA): given a set of routing paths on a network, assign a wavelength to each path such that the paths with the same wavelength are edge-disjoint, using the minimum number of wavelengths. The WA problem is NP-hard for a tree of rings network, which is widely used in practice. In this paper, we give an efficient algorithm which solves the WA problem on a tree of rings with an arbitrary (node) degree using at most 3L wavelengths and achieves an approximation ratio of 2.75, where L is the maximum number of paths on any link in the network. The 3L upper bound is tight: there are instances of the WA problem that require at least 3L wavelengths even on a tree of rings with degree four. We also give a 3L and 2-approximation algorithm for the WA problem on a tree of rings with degree at most six. Previous results include: 4L (resp. 3L) wavelengths for trees of rings with arbitrary degrees (resp. degree at most eight), and a 2-approximation (resp. 2.5-approximation) algorithm for trees of rings with degree four (resp. six).

**[Bian et al., 2004b]**

Author(s): Zhengbing Bian, Qian-Ping Gu, and Xiao Zhou.

Title: Wavelength Assignment on Bounded Degree Trees of Rings.

Technical Report: TR 2004-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2004.

Abstract: A fundamental problem in computer and communication networks is the wavelength assignment (WA) problem: given a set of routing paths on a network, assign wavelengths (channels) to the paths such that the paths with the same wavelength are edge-disjoint. The optimization problem here is to minimize the number of wavelengths. A popular network topology is a tree of rings, which is a collection of rings connected in a tree structure such that any two rings intersect in at most one node and any two nodes are connected by exactly two edge-disjoint paths. It is known to be NP-hard to find the minimum number of wavelengths for the WA problem on a tree of rings. Let L be the maximum number of paths on any edge in the network. Then L is a lower bound on the number of wavelengths for the WA problem. We give a polynomial time algorithm which uses at most 3L wavelengths for the WA problem on a tree of rings with node degree at most eight. This improves the previous result of 4L. We also show that some instances of the WA problem require at least 3L wavelengths on a tree of rings, implying that the 3L upper bound is optimal for the worst case instances. In addition, we prove that our algorithm has approximation ratios 2 and 2.5 for a tree of rings with node degrees at most four and six, respectively.

**[Farahbod et al., 2004]**

Author(s): R. Farahbod, U. Glässer, and M. Vajihollahi.

Title: Abstract Operational Semantics of the Business Process Execution Language for Web Services.

Technical Report: TR 2004-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2004. This TR has been revised; please see the newest version at ftp://fas.sfu.ca/pub/cs/TR/2005/CMPT2005-04.pdf.

Abstract: We formally define an abstract operational semantics for the Business Process Execution Language for Web Services (BPEL), a forthcoming industrial standard for automated business processes. Based on the *abstract state machine* (ASM) paradigm, we model the dynamic properties of the key language constructs through the construction of a *BPEL abstract machine* in terms of partially ordered runs of distributed real-time ASMs. Specifically, we focus here on the *process execution model* and the underlying *execution lifecycle* of BPEL activities. The goal of our work is to provide a well-defined semantic foundation for establishing the key language attributes by eliminating deficiencies hidden in the informal language definition. The resulting abstract machine model provides a comprehensive formalization of the BPEL dynamic semantics at various levels of abstraction including an executable formal semantics.

**[Gu and Tamaki, 2004]**

Author(s): Qian-Ping Gu and Hisao Tamaki.

Title: Optimal Branch-Decomposition of Planar Graphs in O(n^{3}) Time.

Technical Report: TR 2004-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2004.

Abstract: We give an O(n^{3}) time algorithm for constructing a minimum-width branch-decomposition of a given planar graph with n vertices. This is achieved through a refinement of the previously best known algorithm of Seymour and Thomas, which runs in O(n^{4}) time.

**[Letourneau and Liestman, 2004]**

Author(s): Michael J. Letourneau and Arthur L. Liestman.

Title: Heuristics for Generating Additive Spanners.

Technical Report: TR 2004-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2004.

Abstract: Given an undirected and unweighted graph G, the subgraph S is an additive spanner of G with delay d if the distance between any two vertices in S is no more than d greater than their distance in G. It is known that the problem of finding additive spanners of arbitrary graphs for any fixed value of d with a minimum number of edges is NP-hard. Additive spanners are used as substructures for communication networks which are subject to design constraints such as minimizing the number of connections in the network, or permitting only a maximum number of connections at any one node. We consider the problem of constructing good additive spanners. We say that a spanner is ``good'' if it contains few edges, but not necessarily a minimum number of them. We present several algorithms which, given a graph G and a delay parameter d as input, produce a graph S which is an additive spanner of G with delay d. We evaluate each of these algorithms experimentally over a large set of input graphs, and for a series of delay values. We compare the spanners produced by each algorithm against each other, as well as against spanners produced by the best-known constructions for those graph classes with known additive spanner constructions. We highlight several algorithms which consistently produce spanners which are good with respect to the spanners produced by the other algorithms, and which are nearly as good as or, in some cases, better than the spanners produced by the constructions. Finally, we conclude with a discussion of future algorithmic approaches to the construction of additive spanners, as well as a list of possible applications for additive spanners beyond the realm of communication networks.

**[Obradovic et al., 2004a]**

Author(s): Nenad Obradovic, Joseph Peters, and Goran Ruzic.

Title: Multiple Communication Trees of Optimal Circulant Graphs with Two Chord Lengths.

Technical Report: TR 2004-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2004.

Abstract: A collection of spanning trees of a graph is edge-disjoint if no two contain the same edge. Two or more spanning trees of a given graph G(V,E) are edge-independent (vertex-independent) if they are rooted at the same vertex r, and for every other vertex u in V(G), the paths from r to u, one path in each tree, are internally edge-disjoint (respectively, vertex-disjoint). The study of edge-disjoint and independent spanning trees provides a foundation for the development of dynamic fault-tolerant algorithms for distributed computing networks. In this paper, we give constructions of multiple edge-disjoint and multiple independent spanning trees of optimal circulant graphs with two chord lengths. The heights of the edge-disjoint and edge-independent spanning trees are within small additive constants of the diameter. The height of the vertex-independent spanning trees is twice the diameter.

**[Obradovic et al., 2004b]**

Author(s): Nenad Obradovic, Joseph Peters, and Goran Ruzic.

Title: Reliable Broadcasting in Double Loop Networks.

Technical Report: TR 2004-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2004.

Abstract: Broadcast is the fundamental collective communication routine in which the same message is delivered from a single source to all the nodes in a network. The most efficient way to implement broadcast is through the construction of a broadcast tree. We introduce a revised definition of optimality for the i-port model broadcast tree based on fault-tolerance and we construct optimal broadcast trees for interconnection networks modelled by Double Loop Graphs for 1 <= i <= 4.

**[Pei et al., 2004]**

Author(s): Jian Pei, Daxin Jiang, and Aidong Zhang.

Title: On Mining Cross-Graph Quasi-Cliques.

Technical Report: TR 2004-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 2004.

Abstract: Joint mining of multiple data sets can often discover interesting, novel, and reliable patterns which cannot be obtained solely from any single source. For example, in cross-market customer segmentation, a group of customers who behave similarly in multiple markets should be considered as a more coherent and more reliable cluster than clusters found in a single market. As another example, in bioinformatics, by joint mining of gene expression data and protein interaction data, we can find clusters of genes which show coherent expression patterns and also produce interacting proteins. Such clusters may be potential pathways. In this paper, we investigate a novel data mining problem, mining cross-graph quasi-cliques, which is generalized from several interesting applications such as cross-market customer segmentation and joint mining of gene expression data and protein interaction data. We build a general model, show why the complete set of cross-graph quasi-cliques cannot be found by previous data mining methods, and study the complexity of the problem. While the problem is difficult, we develop an efficient algorithm, Crochet, which exploits several interesting and effective techniques and heuristics to efficaciously mine cross-graph quasi-cliques. A systematic performance study is reported on both synthetic and real data sets. We demonstrate some interesting and meaningful cross-graph quasi-cliques in bioinformatics. The experimental results also show that algorithm Crochet is efficient and scalable.
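To make the cross-graph quasi-clique notion above concrete, here is a brute-force sketch (hypothetical graphs; exponential enumeration, not the Crochet algorithm, whose pruning is the paper's contribution). A vertex set is a γ-quasi-clique if every member is adjacent to at least γ·(|S|-1) other members, and a cross-graph quasi-clique must satisfy this in every graph simultaneously:

```python
from itertools import combinations

def is_quasi_clique(vertices, edges, gamma):
    """gamma-quasi-clique test: in the induced subgraph, every vertex
    must be adjacent to at least gamma * (|S| - 1) other members."""
    s = set(vertices)
    need = gamma * (len(s) - 1)
    for v in s:
        deg = sum(1 for u in s if u != v and frozenset((u, v)) in edges)
        if deg < need:
            return False
    return True

def cross_graph_quasi_cliques(n, graphs, gamma, min_size=2):
    """Enumerate all vertex subsets of {0..n-1} that are
    gamma-quasi-cliques in every graph at once."""
    result = []
    for k in range(min_size, n + 1):
        for s in combinations(range(n), k):
            if all(is_quasi_clique(s, g, gamma) for g in graphs):
                result.append(set(s))
    return result

# Two hypothetical graphs on vertices 0..3, edges as sets of frozensets.
g1 = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2), (2, 3)]}
g2 = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (0, 3)]}

# With gamma = 1.0 the test degenerates to ordinary cliques, so this
# finds the cliques common to both graphs (maximal one: {0, 1, 2}).
found = cross_graph_quasi_cliques(4, [g1, g2], 1.0)
```

Lowering gamma below 1.0 relaxes the test so that near-cliques also qualify, which is where the quasi-clique model departs from plain frequent-clique mining.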


**[Benkoczi and Bhattacharya, 2003]**

Author(s): Robert R. Benkoczi and Binay K. Bhattacharya.

Title: The WMD 2-Median Problem on a Tree with Positive and Negative Weights.

Technical Report: TR 2003-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2003.

Abstract: This paper looks at a generalization of the 2-median problem on trees, in which the vertices of the tree are assigned weights that are not necessarily positive. In the k-median problem in a graph, one needs to find a subset of k vertices (medians) such that the sum of the weighted distances from every vertex to the closest median is minimized. In the classic setting, all weights are required to be positive. The generalization was proposed by Burkard et al. in 1998. Here, we improve the 2-median algorithm of Burkard from O(n^{3}) to O(nh log^{2} n) time.

**[Entezari et al., 2003]**

Author(s): Alireza Entezari, Niklas Röber, and Torsten Möller.

Title: Wavelets on Optimal Sampling Lattices for Volumetric Data.

Technical Report: TR 2003-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2003.

Abstract: We exploit the theory of optimal sampling lattices in designing wavelets and filter banks for volumetric datasets. A true multidimensional (non-separable) filter bank is derived for the case of Haar wavelets and applied to various datasets for comparison with the corresponding separable multidimensional method. In contrast with traditional methods of applying the wavelet transform to volume data in a separable fashion, we propose a non-separable wavelet transform that yields the subsampled data on an optimal sampling lattice. This new non-separable filter bank allows for more accurate and efficient multi-resolution representation of the data. Having obtained high-quality subsampled data, we take advantage of methods that render the data directly from this optimal sampling lattice to obtain images that demonstrate the quality of the subsampled data.

**[Farahbod et al., 2003]**

Author(s): R. Farahbod, U. Glässer, and M. Vajihollahi.

Title: Specification and Validation of the Business Process Execution Language for Web Services.

Technical Report: TR 2003-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2003.

Abstract: The definition of BPEL assumes an infrastructure for running Web services on some asynchronous communication architecture. The underlying computation model is characterized by its concurrent and reactive behavior, making it particularly difficult to predict dynamic system properties with a sufficient degree of detail and precision under all circumstances.

**[Glässer and Gu, 2003]**

Author(s): Uwe Glässer and Qian-Ping Gu.

Title: Formal Description and Analysis of a Distributed Location Service for Mobile Ad Hoc Networks.

Technical Report: TR 2003-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2003.

Abstract: In this paper, we define a *distributed abstract state machine* (DASM) model of the network or routing layer protocol for mobile ad hoc networks. In conjunction with the chosen routing strategy, we propose a distributed *logical topology based location service* (LTLS) protocol and give the formal description of this protocol on the DASM model. The high dynamics of mobile ad hoc networks require routing strategies substantially different from those used in static communication networks. A new strategy for such networks is *geographical routing*, in which each network node can find its physical position via GPS or other navigation technologies. To send a packet, the sender needs to know the most recent physical position of the destination node. The position information is provided by *location service* protocols. The LTLS protocol has short message delivery delay, requires small routing tables, uses relatively few administrative packets, and has very good fault tolerance properties. Our goal in defining the network layer protocol in terms of a DASM model is twofold. First, we feel that the mathematical modeling paradigm of distributed abstract state machines provides an ideal formal basis for constructing and studying abstract requirements specifications of mobile ad hoc networks. Second, we intend to utilize the resulting formal behavior models as a basis for developing high-level *executable* specifications serving as a platform for experimental validation and exploration of alternative design choices.

**[Gu and Wang, 2003]**

Author(s): Qian-Ping Gu and Yong Wang.

Title: Efficient Algorithm for Embedding Hypergraphs in a Cycle.

Technical Report: TR 2003-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2003.

Abstract: The problem of *Minimum Congestion Hypergraph Embedding in a Cycle (MCHEC)* is to embed the hyperedges of a hypergraph as paths in a cycle such that the maximum congestion (the maximum number of paths that use any single link in the cycle) is minimized. This problem has many applications, including minimizing communication congestion in computer networks and parallel computations. The MCHEC problem is NP-hard. We give a 1.8-approximation algorithm for the problem. This improves the previous 2-approximation results. The algorithm runs in optimal O(mn) time for a hypergraph with m hyperedges and n nodes.

**[Kameda and Sun, 2003]**

Author(s): Tiko Kameda and Richard Yi Sun.

Title: Optimal Truncated-Harmonic Windows Scheduling for Broadcast Systems.

Technical Report: TR 2003-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2003.

Abstract: The **optimal windows scheduling problem** is defined by positive integers c and w_{1}, w_{2}, …, where c is the number of slotted channels and {w_{i}} are **window** sizes. The transmission of a **page** occupies one **slot** in a channel, and a **schedule** assigns page i to slots such that it appears in every window of w_{i} slots (not necessarily in the same channel). This paper addresses a special case of this problem with the following constraints: (a) w_{i} = m+i-1 for a given positive integer constant m; (b) for j = 1, 2, …, c, channel C_{j} is divided into s_{j} subchannels, each subchannel consisting of every s_{j}-th slot of the channel; (c) each page must appear in one subchannel with a fixed period; and (d) all pages allocated to each subchannel and channel must be consecutive. The objective is to maximize the number of pages that can be scheduled in c channels by choosing the optimal values for {s_{j}}, which depend on m. We analyze this dependency in detail and show that the optimal value of s_{j} is guaranteed to be one of roughly 0.6√p possible values, where p is the window size of the first page to be assigned to C_{j}. We thus can avoid a time-consuming exhaustive search for the optimal {s_{j}}. In the second part of this paper, we first define two segment downloading policies (segment preloading and concurrent segment downloading) and then compare the bandwidth requirements of **Finite-Delay Pagoda Broadcasting** and **Hollermann-Holzscherer Broadcasting** schemes, computed using results from the first part, with those of some other VOD broadcasting schemes for a given **average** waiting time. It is shown that the concurrent downloading policy can in general lead to a lower bandwidth requirement for a given *average* waiting time, and in fact **Quasi-Harmonic Broadcasting** requires the least bandwidth among all the known schemes.

**[Kameda and Wu, 2003]**

Author(s): Tiko Kameda and Shufang Wu.

Title: A lossless VOD broadcasting scheme for VBR videos using available channel bandwidths.

Technical Report: TR 2003-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2003.

Abstract: We propose a new lossless broadcasting scheme for variable bit rate (VBR) encoded videos. This is the first scheme that can use existing channels (e.g., 56Kbps or 64Kbps channels) without any special customized multiplexing requirement. The bandwidth of the available channels is an input to our algorithm. This is in contrast with all other known schemes, which require channels with bandwidth of the form either k·b or b/k, where b (bits/second) is the display rate of the video and k is an integer, or some other arbitrary bandwidth. Given communication channels of fixed bandwidth, we divide each video into a number (n) of segments of different sizes, starting from the end of the video. What is left over is a short initial segment, which is cached in the user's set-top box ahead of time. All other n segments are broadcast on separate channels simultaneously. Using data from real videos, we compare the total bandwidth needed by our scheme with the theoretical minimum bandwidth required if n channels/segments are used but different (non-standard) bandwidth can be allocated to each channel to minimize the total bandwidth. The experimental results demonstrate that the bandwidth requirement of our scheme is within 1% of the optimum for all five real videos that we tested, indicating that our scheme enables us to use ``off-the-shelf'' channels at little extra cost in terms of total bandwidth required.

**[Kameda et al., 2003]**

Author(s): Tiko Kameda, M. Yamashita, and I. Suzuki.

Title: On-line polygon search by a six-state boundary 1-searcher.

Technical Report: TR 2003-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2003.

Abstract: Polygon search is the problem of finding mobile intruders who move unpredictably in a polygonal region. In this paper we consider a special case of this problem, where only a single searcher with one flashlight (called a 1-searcher) is used. Moreover, the searcher is allowed to move only along the boundary of the polygon (boundary search). Our main result is that the movement of the searcher can be controlled by a finite-state machine with only six states. This automaton has no built-in information about the input polygon and, for any given polygon P, if P can be searched by any boundary 1-searcher at all, then this automaton will successfully search P, no matter where on the boundary of P it is initially placed. All information about P is acquired by the automaton ``on-line'', as it searches P.

**[Lapierre et al., 2003]**

Author(s): Soleil Lapierre, Alireza Entezari, and Torsten Möller.

Title: Practicalities of Fourier-domain fluid simulation.

Technical Report: TR 2003-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2003.

Abstract: Motivated by the reduced rendering cost of the Fourier Volume Rendering method, we construct a Navier-Stokes fluid flow simulation that operates entirely in the frequency domain. We show results from a practical implementation and compare with a spatial domain version and with Jos Stam's FFT-based solver. We break down the simulation pipeline into its major components and evaluate the cost of each component in the spatial domain and the frequency domain. We present an analytical as well as an experimental analysis that shows that Stam's FFT-based solver is the best choice for practical applications even though it is not asymptotically best. Further, we examine Fourier Volume Rendering as a viable rendering technique for volumetric fog. We introduce different ways of handling light, from the well-known linear light factorization to a novel focused light beam which achieves real-time performance.

**[Obradovic et al., 2003]**

Author(s): Nenad Obradovic, Joseph Peters, and Goran Ruzic.

Title: Minimum chromaticity of circulant graphs.

Technical Report: TR 2003-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2003.

Abstract: We determine families of circulant graphs for which each graph G = G^{c}(n;S) has χ(G) ≤ 3. In particular, we show that there exists an n_{0} such that for all n ≥ n_{0}, χ(G) ≤ 3 whenever S = {s_{1}, s_{2}, …, s_{k} | s_{k} > s_{k-1} > ⋯ > s_{1} ≥ ⌈s_{k}/2⌉ ≥ 1 ∧ s_{k} ≠ 2·s_{1}} or S = {s_{1}, s_{2} | s_{2} > s_{1} ≥ 1 ∧ s_{2} ≠ 2·s_{1}}. We also prove that χ(G) ≤ 3 for every recursive circulant graph G = RG^{c}(n;d), n = c·d^{m}, 1 < c ≤ d.

**[Tory, 2003]**

Author(s): Melanie Tory.

Title: Mental registration of 3D views and 2D orthographic views in orientation icon displays.

Technical Report: TR 2003-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2003.

Abstract: Side-by-side 2D and 3D views of data ('orientation icon' displays) are becoming common in visualization tools. An empirical study was run to determine the difficulty of mentally registering (spatially relating) 2D orthographic views to 3D views of block objects in orientation icon displays. 2D view orientation (top, right, and front) and object complexity were the variables under investigation. 2D view orientation was found to affect time required for mental registration, with most people requiring more time for top views. Object complexity did not correlate well with mental registration time; however, complex geometry that was partially hidden did seem to increase task time.

**[Wu and Kameda, 2003]**

Author(s): Shufang Wu and Tiko Kameda.

Title: Lossless VBR video broadcasting with user bandwidth limit using uniform channels.

Technical Report: TR 2003-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 2003.

Abstract: This paper proposes a new Video-On-Demand (VOD) broadcasting scheme for variable bit rate (VBR) encoded videos, called Forward Segmentation with Equal Bandwidth (FSEB). For all practical purposes, FSEB solves the elusive problem of minimizing the server bandwidth for delivering VBR videos with no loss, given an upper bound on the wait time, a uniform upper bound on the channel bandwidths, and the number of channels that a user can access simultaneously. Like many other broadcasting schemes, it divides a video into segments, and broadcasts each segment periodically on a separate logical channel. The main difficulty in solving this problem is the fact that VBR frames have irregular sizes, and it is difficult to fit whole frames into channels of fixed bandwidth c. The only other lossless scheme for VBR videos currently known that can limit the number of channels that a user can access simultaneously is StairCase Broadcasting (SCB), which is a heuristic. Experimental results with real videos reveal that FSEB in general requires less server bandwidth and user bandwidth than SCB.
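The startup-wait arithmetic behind periodic segment broadcasting can be illustrated with a deliberately simplified constant-bit-rate sketch (this is our illustration, not the FSEB algorithm, which handles VBR frames and user channel limits): if segment i has duration d_i at display rate b and loops on its own channel of bandwidth c, it repeats with period d_i·b/c, so a set-top box that buffers out-of-order data has it in full that many seconds after tuning in; each segment must be complete before its playback slot begins.

```python
def min_wait(durations, b, c):
    """Toy CBR periodic-broadcast model (hypothetical, for illustration only).
    Segment i of duration durations[i] seconds (size durations[i]*b bits)
    loops on its own channel of bandwidth c, so it is fully buffered
    durations[i]*b/c seconds after tune-in.  Returns the smallest startup
    wait w such that every segment is complete by w plus the total
    duration of the segments played before it."""
    playback_start = 0.0  # playback offset of the current segment
    w = 0.0
    for d in durations:
        period = d * b / c                 # seconds until fully buffered
        w = max(w, period - playback_start)
        playback_start += d
    return w
```

With c = b and geometrically growing segments, e.g. durations 10, 20, 40, each segment becomes available just in time and the wait stays at 10 seconds, which is the intuition behind dividing a video into segments of increasing size.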


**[Bergner et al., 2002]**

Author(s): Steven Bergner, Torsten Möller, and Mark Drew.

Title: Spectral volume rendering.

Technical Report: TR 2002-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2002.

Abstract: We describe a method for volume rendering using spectral representation of colour instead of RGB. Full spectra allow the generation of physically correct rendering of a scene under different lighting conditions. To make the method practicable, a new low-dimension subspace method is used to act as the carrier of spectral information. To fully exploit spectra, we introduce the technique of post-illumination to generate new spectral images for arbitrary light colours in real-time. As well, a tool is developed to design a palette of lights and materials having certain properties, such as selective metamerism or colour constancy. Applied to spectral transfer functions, different light colours can accentuate or hide specific qualities of the data. In conjunction with post-illumination this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.

**[Carr et al., 2002]**

Author(s): Hamish Carr, Thomas Theussl, and Torsten Möller.

Title: Isosurfaces on optimal regular samples.

Technical Report: TR 2002-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 2002.

Abstract: Volumetric samples on cubic lattices are less efficient than samples on body-centred cubic (bcc) lattices. In this paper we propose several algorithms to construct isosurfaces on bcc lattices. We introduce the marching octahedra and modified marching octahedra algorithms. We further devise and investigate the modified marching hexahedra (mmh) for bcc lattices. Our experimental results show that isosurfaces generated using mmh are competitive with isosurfaces generated using marching cubes.

**[Comellas et al., 2002]**

Author(s): Francesc Comellas, Margarida Mitjana, and Joseph G. Peters.

Title: Epidemics in small-world communication networks.

Technical Report: TR 2002-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 2002.

Abstract: The two main characteristics of small-world networks are strong local clustering and small diameter. These two characteristics are desirable properties in communication networks, since typical communication patterns show large amounts of local communication and a small amount of non-local communication that must be completed quickly. In this paper, we study variants of broadcasting that resemble the spread of computer viruses in networks. We also study the related problem of stopping an epidemic.

**[Drew and Finlayson, 2002]**

Author(s): Mark S. Drew and Graham D. Finlayson.

Title: Multispectral processing without spectra.

Technical Report: TR 2002-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 2002.

Abstract: It is often possible and desirable to reduce the dimensionality of variables describing data, e.g. when the original number of variables is large and when errors introduced by approximation can be tolerated. This is particularly the case with respect to spectral measurements of illuminants and surfaces. In computer graphics and computer vision, it is often the case that multiplications of whole spectra, component by component, must be carried out. For example, this is the case when light reflects from matter, or is transmitted through materials. This statement particularly holds for spectrally-based ray tracing or radiosity in graphics. There, many such multiplies must be carried out, making a full-spectrum method prohibitive. However, using full spectra is attractive because of the many important phenomena that can be modelled using the full physics only. Here we apply a method previously used in modeling RGB-based light propagation to a very different task: spectral multiplication. Previously, we had developed a ``spectral sharpening'' method to map camera or eye RGBs into a different basis, such that lighting change is better described in terms of a simple diagonal transform. Here, we apply this idea to ``sharpening'' the set of principal component vectors for a set of spectra. In the new basis, no information is lost, yet spectral multiplies can be accurately carried out in a space with much-reduced dimensionality. Although the method is quite general and can be applied in several different fields, as an illustration of its utility we show how the method of modeling spectral image formation applies to tasks in graphics.

**[Goldman et al., 2002]**

Author(s): Alfredo Goldman, Joseph G. Peters, and Denis Trystram.

Title: Exchanging messages of different sizes.

Technical Report: TR 2002-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 2002.

Abstract: This paper deals with the study of the exchange of messages among a set of processors linked through an interconnection network. We focus on general, non-uniform versions of message exchange problems in asynchronous systems with a linear cost model and messages of arbitrary sizes. We extend previous complexity results to show that the general asynchronous problems are NP-complete. We present several heuristics and determine which algorithms are best suited to several parallel systems. We propose new algorithms that combine the advantages of some of the heuristics. We conclude with experiments and simulations of the algorithms and analyses of their performances.

**[Kilthau et al., 2002]**

Author(s): Steven L. Kilthau, Mark S. Drew, and Torsten Möller.

Title: Full search content independent block matching based on the fast Fourier transform.

Technical Report: TR 2002-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2002.

Abstract: In this paper, we present a new algorithm for solving the block matching problem which is independent of image content and is faster than other full-search methods. The method employs a novel data structure called the Windowed-Sum-Squared-Table, and uses the fast Fourier transform (FFT) in its computation of the sum squared difference (SSD) metric. Use of the SSD metric allows for higher peak signal-to-noise ratios than other fast block matching algorithms, which require the sum of absolute difference (SAD) metric. However, because of the complex floating point and integer math used in our computation of the SSD metric, our method is aimed at software implementations only. Test results show that our method has a running time 13%-29% of that for the exhaustive search, depending on the size of the search range.

**[Kurn, 2002]**

Author(s): Andrew Kurn.

Title: Allocation of processors of two classes is NP-complete.

Technical Report: TR 2002-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2002.

Abstract: For a set of tasks with fixed start and end times, and a set of processors of two classes, one strictly better than the other, the question of whether the tasks can be run on the processors is strongly NP-complete.

**[Mitrovic-Minic and Krishnamutri, 2002]**

Author(s): Snezana Mitrovic-Minic and Ramesh Krishnamutri.

Title: The multiple traveling salesman problem with time windows: Bounds for the minimum number of vehicles.

Technical Report: TR 2002-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2002.

Abstract: This paper deals with finding lower and upper bounds for the minimum number of vehicles needed to serve all locations of the multiple traveling salesman problem with time windows. Two types of precedence graphs are introduced: the start-time precedence graph and the end-time precedence graph. The bounds are generated by covering the precedence graph with a minimum number of paths. Instances for which the bounds are tight are presented, as well as instances for which the bounds can be arbitrarily bad. The closeness of such instances is discussed.

**[Peters et al., 2002]**

Author(s): Joseph G. Peters, Christophe Rapine, and Denis Trystram.

Title: Small depth arc-disjoint spanning trees in two-dimensional toroidal meshes.

Technical Report: TR 2002-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2002.

Abstract: In this paper we present a new method to construct maximal families of arc-disjoint spanning trees in two-dimensional symmetric directed toroidal meshes. For two-dimensional toroidal meshes with diameter D, our method produces four spanning trees of depth D+2, only one greater than the optimum value D+1. The best previous results for this problem are families of four trees with depth 2D-2 and families of four minimum-depth trees for the special case of square two-dimensional toroidal meshes.

**[Shu et al., 2002]**

Author(s): Keith Shu, Colin Swindells, Denis Chai, and Kori M. Inkpen.

Title: Windowspaces to share our digital media.

Technical Report: TR 2002-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2002.

Abstract: In this paper we describe the WindowSpaces system, designed to facilitate the sharing of digital media between collocated individuals in an ad hoc face-to-face setting. WindowSpaces promotes the seamless sharing of experiences embodied in digital media between multiple participants within dynamic contexts.

**[Tory and Möller, 2002]**

Author(s): Melanie Tory and Torsten Möller.

Title: A model-based visualization taxonomy.

Technical Report: TR 2002-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2002.

Abstract: Frameworks for organizing literature and ideas in visualization are valuable since they allow us to gain a higher-level understanding of the state of visualization research. Current taxonomies of visualization techniques are problematic because the terminology is vague. We present a new taxonomy based on models of a data set rather than attributes of the data itself. This method addresses several problems with existing classification schemes and generates less ambiguous visualization categories.

**[Tory and Swindells, 2002]**

Author(s): Melanie Tory and Colin Swindells.

Title: Exovis: An overview and detail technique for volumes.

Technical Report: TR 2002-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2002.

Abstract: Volume data sets can be difficult to visualize because of their large size and high data density. Methods to simultaneously visualize a global overview and local details have been developed for other data types, but have not been extensively explored for volume data. We present ``ExoVis'', a new ``overview and detail'' technique for volumes. ExoVis widgets are objects that display local slice and subvolume details outside or surrounding a three-dimensional overview of the data set. Multiple slices and subvolumes may be viewed simultaneously, each with unique orientation and rendering properties. Several new interaction techniques are possible within our framework, including new versions of cine mode and a new way to save and compare transfer functions. Our method also generates new possibilities for multivariate data visualization.


**[Cameron and Tatu, 2001]**

Author(s): Robert D. Cameron and Serban G. Tatu.

Title: Bibliographic protocol level 1: Link resolution and metapage retrieval.

Technical Report: TR 2001-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 2001.

Abstract: BibP (bibliographic protocol) links bibliographic identifiers of published works to bibliographic services for those works. Identifiers follow the Universal Serial Item Name (USIN) scheme, providing a scholar-friendly conventional notation for journal articles, books and institutional publications, as well as a generic framework that can scale to identify documents in any organized collection. A hierarchical resolution model emphasizes bibliographic services available through local libraries backed up by publisher-specified and global services. Resolution is achieved through existing DNS technology coupled with appropriate client-side support. Deployment of BibP clients with most of the popular web browsers is possible today; this paper presents one such client, written in JavaScript.

**[Kilthau and Möller, 2001]**

Author(s): Steven Kilthau and Torsten Möller.

Title: Splatting optimizations.

Technical Report: TR 2001-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, June 2001.

Abstract: We present a highly optimized implementation of the splatting algorithm that achieves near real-time rendering rates. The optimizations are achieved through numerical approximation techniques, use of triangular shaped splats, the exploitation of current hardware OpenGL extensions, as well as an optimized data structure based on run length encoding. The strength of this paper is an exhaustive analysis of the bottlenecks of the splatting algorithm and our results should therefore be easy to adapt to any specific rendering platform. Using the optimizations presented, we gain speedups on the order of 5-10 times over traditional, recently reported implementations.

**[Shoemaker, 2001]**

Author(s): Garth B. D. Shoemaker.

Title: Single display groupware research in the year 2000.

Technical Report: TR 2001-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2001.

Abstract: Users of computer systems are often best served by having the fewest constraints imposed on their interaction styles. This allows users to choose whatever interaction style they deem most effective in any particular circumstance. The recent growth in Single Display Groupware research is a sign that researchers believe the one-person one-display computer usage paradigm may be overly restrictive, to the detriment of the user. It is believed that allowing more than one user to synchronously interact with one display may be desirable in certain circumstances. This paper first looks at the major technological breakthroughs in the field of Single Display Groupware. It then examines research outside the field that is highly relevant, and summarizes the major published works in the area of Single Display Groupware. Finally, conclusions are drawn as to the next logical steps to be taken in Single Display Groupware research.


**[Fong and Cameron, 2000]**

Author(s): Philip W. L. Fong and Robert D. Cameron.

Title: Java proof linking with multiple classloaders.

Technical Report: TR 2000-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 2000.

Abstract: The Proof Linking Architecture was proposed as a framework for conducting modular verification in the presence of lazy, dynamic linking. The model was instantiated to modularize Java bytecode verification in a simplified Java run-time environment, in which there was a single classloader. This paper analyzes the interaction between proof linking and lazy, dynamic linking in the setting of multiple classloaders. It turns out that a systematic, straightforward set of extensions to the original model is sufficient to make proof linking work with multiple classloaders. This demonstrates that the proof linking idea is applicable to realistic mobile code environments.

**[Kuederle, 2000]**

Author(s): Oliver Kuederle.

Title: Applying task observations to improve MRI presentation.

Technical Report: TR 2000-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 2000.

Abstract: In Magnetic Resonance Imaging (MRI), anatomical structures are visualized by scanning successive slices of the human body. Traditionally, radiologists use a large light screen display to view MRI volume sets. This paper describes a video mock-up and a resulting software prototype developed to display MRI volume sets on a traditional computer monitor. Five key objectives were identified concerning the design of the software prototype.

**[Orchard and Möller, 2000]**

Author(s): Jeff Orchard and Torsten Möller.

Title: Accelerated splatting using a 3D adjacency data structure.

Technical Report: TR 2000-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 2000.

Abstract: We introduce a new acceleration to the standard splatting volume rendering algorithm. Our method achieves full colour (32-bit), depth-sorted and shaded volume rendering significantly faster than standard splatting. The speedup is due to a 3-dimensional adjacency data structure that efficiently skips transparent parts of the data and stores only the voxels that are potentially visible. Our algorithm is very robust and flexible, allowing for depth sorting of the data, including correct back-to-front ordering for perspective projections. This makes interactive splatting possible for applications such as medical visualizations that rely on structure and depth information.

**[Shoemaker and Inkpen, 2000]**

Author(s): Garth B. D. Shoemaker and Kori M. Inkpen.

Title: MIDDesktop: An application framework for single display groupware investigations.

Technical Report: TR 2000-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 2000.

Abstract: It has become apparent that the design of computers needlessly imposes a limitation of one user per machine. Research in the area of Single Display Groupware (SDG) explores how to best support simultaneous interactions of multiple users with a single, shared display [7]. This paper presents a framework that allows for SDG investigations without requiring the development of custom SDG software applications. This is an important development because the software required to investigate SDG-related topics is often complicated and time-consuming to develop.

**[Suderman and Delgrande, 2000]**

Author(s): Matthew Suderman and James Delgrande.

Title: Considerations on compositional update operators.

Technical Report: TR 2000-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 2000.

Abstract: We develop two specific operators for modifying a knowledge base: Amend updates a knowledge base with a formula, while the complementary operator Forget removes a formula from the knowledge base. The approach differs from other belief change operators in that the definition of the operators is compositional with respect to the sentence added or removed. For example, Forget applied to a conjunction is defined to be the union of Forget applied to the individual conjuncts. In some cases the resulting approach has better complexity characteristics than other operators; however, this comes at the expense of irrelevance of syntax and others of the Katsuno-Mendelzon update postulates. Alternatively, we achieve full irrelevance of syntax if the sentence for update is replaced by the disjunction of its prime implicants and, conversely, the sentence for removal is replaced by the conjunction of its prime implicates. The approach is illustrated with examples and through comparison with other approaches. The approach is interesting, first, because it is founded on intuitions that differ from those of other operators, being based on compositional concerns; second, because it allows a straightforward, and in some cases more efficient, implementation.
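The compositional rule stated in the abstract (Forget on a conjunction is the union of Forget on the conjuncts) can be sketched structurally. This is only an illustration of the recursion, not the authors' operator: the base case on non-conjunctive formulas carries the real semantics and is deliberately left as a parameter here.

```python
def forget(kb, formula, base_forget):
    """Structural sketch of a compositional Forget.  Conjunctions are
    represented as ('and', f1, f2, ...); Forget distributes over them
    as a union.  `base_forget(kb, f)` handles every other formula and
    stands in for the substantive definition developed in the report."""
    if isinstance(formula, tuple) and formula and formula[0] == 'and':
        result = set()
        for conjunct in formula[1:]:
            result |= forget(kb, conjunct, base_forget)
        return result
    return base_forget(kb, formula)
```

With a stub base operator that merely records what it was asked to forget, forgetting p ∧ (q ∧ r) yields the union of the three atomic calls, matching the definition in the abstract.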


**[Benkoczi and Bhattacharya, 1999]**

Author(s): Robert R. Benkoczi and Binay K. Bhattacharya.

Title: Spine tree decomposition.

Technical Report: TR 1999-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 1999.

Abstract: Many practical problems on trees require data structures to efficiently extract information from these trees. A common approach to the problem is to use recursion combined with tree decomposition. The centroid tree decomposition has been extensively used for this purpose. We propose a different approach, a spine tree decomposition, that is, in our opinion, more intuitive, faster to compute, and almost always gives better results when this data structure is used in place of a centroid decomposition. In particular, we look into two problems for which a centroid decomposition is used. We show that the centroid decomposition has a major disadvantage over our decomposition and that people are forced to circumvent it using high-constant-factor algorithms, or even brute force. Moreover, our decomposition is self-adapting, giving the best results when used on unbalanced trees and matching the performance of the centroid decomposition on balanced trees.

**[Cowperthwaite et al., 1999]**

Author(s): David J. Cowperthwaite, M. Sheelagh T. Carpendale, and F. David Fracchia.

Title: Editing in elastic presentation spaces.

Technical Report: TR 1999-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 1999.

Abstract: Recent years have seen the development of increasingly more general distortion viewing methods in response to the screen real-estate problem. While these methods have provided a means of displaying more information within the limitations of the computer screen, the very distortions that they introduce present a problem in subsequent interaction with the data. Distortions of the information space are arbitrarily complex and analytical solutions to perform a backwards mapping are correspondingly convoluted. We present here a general solution to the problem of determining the origin of a point in a distorted information space that does not rely on the construction of an analytical backwards mapping. This method relies on the congruent distortion of an encoded map space along with the information space and provides a simple means of determining the original position of any point within a distorted space.

**[Fall and Fall, 1999]**

Author(s): Andrew Fall and Joseph Fall.

Title: Beauty and the beast: Separating specification from implementation for models of landscape dynamics.

Technical Report: TR 1999-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 1999.

Abstract: Spatially explicit models of landscape structure and change are becoming common tools for complementing empirical studies, evaluating management strategies, and developing theory in landscape ecology. In many cases, such models are inseparable from the computer programs used to implement them. When the underlying model is hidden in the details of a program, it is difficult to verify, modify or adapt the model, make comparisons between models, or integrate two or more models. One of the goals of SELES (Spatially Explicit Landscape Event Simulator) has been to separate the specification of model behaviour from the mechanics of implementing such a model on a computer. SELES models are specified in a high-level, structured modelling language that frees landscape modellers from programming, allowing them to focus on the underlying model rather than the implementation details. The SELES simulation engine executes the model, converting the high-level specification into a computer simulation of landscape change. The SELES language is declarative, and thus permits a clear representation of the underlying model, which, in turn, yields models that are more easily verified, compared, modified, and reused. SELES also provides a structured framework that facilitates rapid model prototyping and guides the development of a broad class of spatial landscape models.

**[Fong and Cameron, 1999]**

Author(s): Philip W. L. Fong and Robert D. Cameron.

Title: Proof linking: Modular verification of mobile programs in the presence of lazy, dynamic linking.

Technical Report: TR 1999-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 1999.

Abstract: Although mobile code systems typically employ run-time code verifiers to secure host computers from potentially malicious code, implementation flaws in the verifiers may still leave the host system vulnerable to attack. Compounding the inherent complexity of the verification algorithms themselves, the need to support lazy dynamic linking in mobile code systems typically leads to architectures that exhibit strong interdependencies between the loader, the verifier and the linker. To simplify verifier construction and provide improved assurances of verifier integrity, we propose a modular architecture based on the concept of proof linking. This architecture encapsulates the verification process and removes dependencies between the loader, the verifier, and the linker. We also formally model the process of proof linking and establish properties to which correct implementations must conform. As an example, we instantiate our architecture for the problem of Java bytecode verification and assess the correctness of this instantiation. Finally, we briefly discuss alternative mobile code verification architectures enabled by the proof linking concept.

**[Hadley, 1999]**

Author(s): Robert F. Hadley.

Title: Cognition and the computational power of connectionist networks.

Technical Report: TR 1999-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 1999.

Abstract: This paper examines certain claims of "cognitive significance" which (wisely or not) have been based upon the theoretical powers of three distinct classes of connectionist networks, namely, the "universal function approximators", recurrent finite-state simulation networks, and Turing equivalent networks. Each class will here be considered with respect to its potential in the realm of cognitive modeling. Regarding the first class, I argue that, contrary to the claims of some influential connectionists, feed-forward networks do not possess the theoretical capacity to approximate all functions of interest to cognitive scientists. For example, they cannot approximate many important, simple recursive (halting) functions which map symbolic strings onto other symbolic strings. By contrast, I argue that a certain class of recurrent networks (i.e., those which closely approximate deterministic finite automata, DFA) shows considerably greater promise in some domains. However, serious difficulties arise when we consider how the relevant recurrent networks (RNNs) could acquire the weight vectors needed to support DFA simulations. Indeed, the most widely used method of inducing weight vectors (supervised learning) is shown to be implausible in the realm of several high-level cognitive functions. Furthermore, hypotheses founded upon innate-wiring and self-organizing learning likewise encounter serious obstacles. This is not to say that a set of separate connectionist modules would present similar obstacles. However, a set of RNN modules is not an RNN in the sense assumed by the various theoretical proofs discussed herein. In addition, the class of Turing equivalent networks is here examined. It is argued that the relevance of such networks to cognitive modeling is seriously undermined by their reliance on infinite precision in crucial weights and/or node activations. I also examine what advantage these networks might possess over and above classical symbolic algorithms. For, from a cognitive standpoint, the Turing equivalent networks present difficulties very similar to certain classical algorithms; they appear highly contrived, their structure is fragile, and they exhibit little or no noise-tolerance.

**[Hafer, 1999a]**

Author(s): L. Hafer.

Title: bonsaiG: Algorithms and design.

Technical Report: TR 1999-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1999. (PostScript)

Abstract: This report describes the implementation of bonsaiG, a program for mixed-integer linear programming (MILP). bonsaiG is a research code, designed to explore the utility and power of arc consistency as a general technique for solving MILP problems, and to provide a foundation for exploring other techniques. It strives to provide maximum flexibility, control, and robustness, while retaining a reasonable level of efficiency. It implements an LP-based branch-and-bound algorithm and supports binary, general integer, and continuous variables. The underlying LP solver is an implementation of a dynamic LP algorithm.

**[Hafer, 1999b]**

Author(s): L. Hafer.

Title: bonsaiG: User's manual.

Technical Report: TR 1999-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1999. (PostScript)

Abstract: User's Manual for the bonsaiG MILP code.

**[Han et al., 1999]**

Author(s): Jiawei Han, Jian Pei, and Yiwen Yin.

Title: Mining partial periodicity using frequent pattern trees.

Technical Report: TR 1999-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 1999.

Abstract: Mining partial periodic patterns in time-series databases may disclose interesting relationships among events. Such mining can be expected to discover many short as well as long frequent patterns. The classical Apriori algorithm is efficient for mining short frequent patterns but is quite costly at mining long patterns. The MaxMiner-like algorithm is good for mining long frequent max-patterns, but does not work efficiently for mining a mixture of long and short patterns. The MaxHitSet algorithm developed in our previous research performs well if there are not too many distinct patterns but encounters difficulties in handling a large number of frequent patterns. In this study, we propose a novel Frequent Pattern Tree (FP-tree) structure, which is an extended tree structure storing compressed, crucial information about frequent patterns, and develop two efficient FP-tree-based algorithms, FPall and FPmax, for mining frequent periodic all-patterns and frequent periodic max-patterns respectively. With the FP-tree structure, the mining of long and short frequent patterns, as well as frequent max-patterns, can be performed efficiently. Our experimental and performance studies show that the FP-tree-based mining outperforms by a wide margin all three previously proposed methods for mining frequent periodic patterns in time-series databases.

**[Kurn, 1999]**

Author(s): Andrew Kurn.

Title: Allocation of strictly ordered processors is NP-complete.

Technical Report: TR 1999-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1999.

Abstract: For a set of tasks with fixed start and end times, and a well-ordered set of processors, the question of whether the tasks can be run on the processors is NP-complete.

**[Li and Yang, 1999]**

Author(s): Sheng Li and Qiang Yang.

Title: ActiveCBR: Integrating case-based reasoning and active databases.

Technical Report: TR 1999-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 1999.

Abstract: Case-based reasoning (CBR) is an Artificial Intelligence (AI) technique for problem solving that uses previous similar examples to solve a current problem. Despite its success, most current CBR systems are passive: they require human users to activate them manually and to provide information about the incoming problem explicitly. In this paper, we present an integrated system that combines a CBR system with an active database system. Active databases, with the support of active rules, can perform event detection, condition monitoring, and event handling (action execution) in an automatic manner. The combined ActiveCBR system consists of two layers. In the lower layer, the active database is rule-driven; in the higher layer, the result of action execution of active rules is transformed into feature-value pairs required by the CBR subsystem. The layered architecture separates case-based reasoning from complicated rule-based reasoning, and improves the traditional *passive* CBR system with the *active* property. Advantages of the combined ActiveCBR system include the flexibility in knowledge management that an active database system lacks, and the ability to respond autonomously to external events that a passive CBR system lacks. This work, a first attempt to combine the two types of systems, contributes to both database research and CBR research in that it allows the combined knowledge base to be highly autonomous, scalable, and easily changeable. We demonstrate the system's efficiency and effectiveness through empirical tests.

**[Sun et al., 1999]**

Author(s): Yinlong Sun, David F. Fracchia, Mark S. Drew, and Thomas W. Calvert.

Title: Rendering iridescent colors of optical disks.

Technical Report: TR 1999-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 1999.

Abstract: The iridescent colors observed on objects such as optical disks are caused by light diffraction from their surface microstructure. We show that when an optical disk is illuminated by a light source it typically displays a major highlight strip of colors diagonally across the disk, consisting of saturated colors that vary along the strip, and a pair of weak strips of unsaturated colors that change across the strip. In this report, we propose a diffractive illumination model and use it to render such iridescent colors. Our model is based completely on the physical structure of optical disks and includes contributions due to both diffractive and non-diffractive factors. For the diffractive part, we first model the pit periodicity for optical disks by using identical spheres and we then approximate their distribution within tracks by uniform groups of spheres. Utilizing this representation, we carry out analytic, self-contained calculations from first principles based on the superposition principle of light waves, and derive the distribution of light intensity due to diffraction. We also propose and prove the condition for highlights on illuminated grooved surfaces; this condition provides the non-diffractive contribution within our model. The model successfully accounts for the complex behavior of optical disks: rendered images achieve excellent agreement with corresponding photographs of real disks. While this report specifically focuses on optical disks, the key idea in our illumination model is generally applicable to various diffractive structures.

**[Tauber et al., 1999]**

Author(s): Zinovi Tauber, Ze-Nian Li, and Mark S. Drew.

Title: Locale-based object search under illumination change.

Technical Report: TR 1999-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1999.

Abstract: Providing a user with an effective image search engine has been a very active research area. A search by an object model is considered to be one of the most desirable and yet difficult tasks. An added difficulty is that objects can be photographed under different lighting conditions. We have developed a feature localization scheme that finds a set of *locales* in an image. Locales are defined to be overlapping regions of an image. We make use of a diagonal model for illumination change and obtain a candidate set of lighting transformation coefficients in chromaticity space. For each pair of coefficients, *Elastic Correlation* is performed, which is a form of correlation of locale colors. Elastic Correlation also yields a candidate assignment of image locales to search object locales. A weighted Mean-Square-Error minimization for pose estimation is then applied, followed by an efficient process of texture histogram intersection and a Generalized Hough Transform, since the rotation, scale and translation parameters have been recovered. Tests on a database of over 1,400 images and video clips show promising image retrieval results.

**[Yang and Wu, 1999]**

Author(s): Qiang Yang and Jing Wu.

Title: Interactive case based reasoning with case clustering and decision forests.

Technical Report: TR 1999-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 1999.

Abstract: In interactive case-based reasoning, it is important to present a small number of important cases and problem features to the user at one time. This goal is difficult to achieve when large case bases are commonplace in industrial practice. In this paper we present our solution to the problem by highlighting the interactive user-interface component of the CaseAdvisor system. In CaseAdvisor, decision forests are created in real time to help compress a large case base into several small ones. This is done by merging similar cases together through a novel clustering algorithm. An important side effect of this operation is that it allows up-to-date maintenance operations to be performed for case-base management. During the retrieval process, an information-guided subsystem can then generate decision forests based on users' current answers obtained through an interactive process. Action steps are suggested to the user in an incremental manner, and results of the actions are used to formulate the next questions and suggestions. Possible questions to the user are carefully analyzed through information theory. An important feature of the system is that case-base maintenance and reasoning are integrated in a seamless whole. In this article we present the system architecture, algorithms, as well as empirical evaluations that support our claims.
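The information-theoretic question selection this abstract describes can be illustrated with a minimal sketch: score each candidate feature by its information gain over the current candidate cases and ask about the highest-scoring one. The feature names and case data below are hypothetical illustrations, not CaseAdvisor's actual data or code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of case labels (solutions)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(cases, feature):
    """Expected entropy reduction from asking about `feature`.
    Each case is a pair (feature_dict, solution_label)."""
    labels = [label for _, label in cases]
    base = entropy(labels)
    remainder = 0.0
    for value in {f[feature] for f, _ in cases}:
        subset = [label for f, label in cases if f[feature] == value]
        remainder += len(subset) / len(cases) * entropy(subset)
    return base - remainder

def next_question(cases, features):
    """Pick the feature whose answer best discriminates the cases."""
    return max(features, key=lambda f: information_gain(cases, f))

# Hypothetical troubleshooting case base.
cases = [
    ({"power_on": "yes", "beeps": "no"},  "disk-failure"),
    ({"power_on": "yes", "beeps": "yes"}, "memory-failure"),
    ({"power_on": "no",  "beeps": "no"},  "psu-failure"),
    ({"power_on": "no",  "beeps": "no"},  "psu-failure"),
]
print(next_question(cases, ["power_on", "beeps"]))  # -> power_on
```

After each answer, the case base is filtered to the consistent cases and the scoring repeats, which yields the incremental question-and-suggestion loop the abstract describes.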


**[Allen et al., 1998]**

Author(s): Tim Van Allen, James Delgrande, and Arvind Gupta.

Title: Point-based approaches to qualitative temporal reasoning.

Technical Report: TR 1998-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 1998.

Abstract: We are interested in the general problem of qualitative, point-based temporal reasoning. That is, given a database of assertions concerning the relative positions of points in some time frame, we are interested in various operations on this database. Issues include compiling the assertions into a representation that supports efficient reasoning, determining whether a database is consistent, computing the strongest entailed relation between two points, computing the deductive closure of all assertions, and updating the set of assertions. We begin by considering a general set of operations and their corresponding algorithms, applicable to general point-based temporal domains. We also consider special-purpose reasoners, which may be expected to perform well in a temporal domain with a particular restricted structure. To this end we discuss the notion of a metagraph and present a reasoner based on series-parallel graphs. We also analyse and revise the TimeGraph reasoner of Gerevini and Schubert. Lastly, we present a comparison of different approaches, based on running times for different data sets.

**[Cameron, 1998a]**

Author(s): Robert D. Cameron.

Title: REX: XML shallow parsing with regular expressions.

Technical Report: TR 1998-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1998.

Abstract: The syntax of XML is simple enough that it is possible to parse an XML document into a list of its markup and text items using a single regular expression. Such a shallow parse of an XML document can be very useful for the construction of a variety of lightweight XML processing tools. However, complex regular expressions can be difficult to construct and even more difficult to read. Using a form of literate programming for regular expressions, this paper documents a set of XML shallow parsing expressions that can be used as a basis for simple, correct, efficient, robust and language-independent XML shallow parsing. Complete shallow parser implementations of less than 50 lines each in Perl, JavaScript and Lex/Flex are also given.

**[Cameron, 1998b]**

Author(s): Robert D. Cameron.

Title: Scholar-friendly DOI suffixes with JACC: Journal article citation convention.

Technical Report: TR 1998-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 1998.

Abstract: JACC (Journal Article Citation Convention) is proposed as an alternative to SICI (Serial Item and Contribution Identifier) as a convention for specifying journal articles in DOI (Digital Object Identifier) suffixes. JACC is intended to provide a very simple tool for scholars to easily create Web links to DOIs and to also support interoperability between legacy article citation systems and DOI-based services. The simplicity of JACC in comparison to SICI should be a boon both to the scholar and to the implementor of DOI responders.

**[Cukierman, 1998]**

Author(s): Diana Cukierman.

Title: Temporal reasoning in artificial intelligence: A survey, with focus on repetition and calendar formalization.

Technical Report: TR 1998-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 1998.

Abstract: This article surveys formal representation and reasoning about time in the area of Artificial Intelligence (AI), briefly including closely related work from the Temporal Database (TDB) community, as well as work from related application areas. Special emphasis is given to Allen's Interval Algebra and its subalgebras, analyzing their different expressive power and computational characteristics. Furthermore, the focus of this survey is on the formalization of temporal repetition and of calendars and calendar-based temporal objects.

**[Cukierman and Delgrande, 1998a]**

Author(s): Diana Cukierman and James Delgrande.

Title: Expressing time intervals and repetition within a formalization of calendars.

Technical Report: TR 1998-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 1998.

Abstract: We investigate a formal representation of time units, calendars, and time unit instances as restricted temporal entities for reasoning about repeated activities. We examine characteristics of time units, and provide a categorization of the hierarchical relations among them. Hence we define an abstract hierarchical unit structure (a calendar structure) that expresses specific relations and properties among the units that compose it. Specific objects in the time line are represented based on this formalism, including non-convex intervals corresponding to repeated activities. A goal of this research is to be able to represent and reason efficiently about repeated activities.

**[Cukierman and Delgrande, 1998b]**

Author(s): Diana Cukierman and James Delgrande.

Title: Towards a formal characterization of temporal repetition with closed time.

Technical Report: TR 1998-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 1998.

Abstract: We propose a novel approach to formally characterize temporal repetition. This differs from what has appeared so far in the AI/temporal reasoning literature, where time and particularly temporal repetition are represented within a linear structure. We propose to model temporal objects representing repetition with a closed time structure. Based on convex intervals and Allen's convex relations, we define a new temporal object, the time loop. Such an object captures the core of what is repeated and which relations hold between repetitions in one cycle. Hence, this formalism allows one to concisely represent, for example, the scheduling of regular meetings in a university, to specify calendars, and to represent repetitive processes such as those occurring in an assembly line.

**[Fertin and Peters, 1998]**

Author(s): Guillaume Fertin and Joseph G. Peters.

Title: Optimal odd gossiping.

Technical Report: TR 1998-24, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1998.

Abstract: In the gossiping problem, each node in a network starts with a unique piece of information and must acquire the information of all other nodes using two-way communications between pairs of nodes. In this paper we investigate gossiping in n-node networks with n odd. We use a linear cost model in which the cost of communication is proportional to the amount of information transmitted. In synchronous gossiping, the pairwise communications are organized into rounds, and all communications in a round start at the same time. We present optimal synchronous algorithms for all odd values of n. In asynchronous gossiping, a pair of nodes can start communicating while communications between other pairs are in progress. We provide a short intuitive proof that an asynchronous lower bound due to Peters, Raabe, and Xu is not tight.

**[Fong, 1998]**

Author(s): Philip W. L. Fong.

Title: Viewer's discretion: Host security in mobile code systems.

Technical Report: TR 1998-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1998. (PostScript)

Abstract: Mobile code computation is a new paradigm for structuring distributed systems. Mobile programs migrate from remote sites to a host, and interact with the resources and facilities local to that host. This new mode of distributed computation promises great opportunities for electronic commerce, mobile computing, and information harvesting. There has been a general consensus that security is the key to the success of mobile code computation. In this paper, we survey the issues surrounding the protection of a host from potentially hostile mobile programs. Decades of research in operating systems has provided significant experience and insight into the nature of system security. Before we propose any new security model for mobile code systems, it is wise to question why the existing protection mechanisms found in distributed operating systems do not fully address the security needs of mobile code systems. We propose three security challenges that are distinctive of the mobile code phenomenon, namely, the establishment of anonymous trust (establishing trust with programs from unfamiliar origins), layered protection (establishing fine-grained protection boundaries among mutually-distrusting parts of the same process), and implicit acquisition (the implicit nature of mobile program acquisition). We also survey various approaches to protection in existing mobile code systems. We classify protection approaches into four categories: discretion, verification, transformation, and arbitration. We evaluate each category by looking at how well they address the security needs of mobile code computation.

**[Grün, 1998]**

Author(s): Gabrielle Assunta Grün.

Title: New forms of association rules.

Technical Report: TR 1998-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 1998.

Abstract: The problem under consideration is association analysis, the discovery of association rules describing attribute value conditions that generally occur together. The standard form of an association rule is an implication, X ⟹ Y, where X and Y are subsets of an itemset from a set of transactions with empty intersection, or, more generally, X and Y are conjunctions of attribute-value pairs or predicates. Several new forms of association rules are proposed and motivated, including bidirectional or equality rules and mutually exclusive rules. Algorithms for the discovery of these rules are described. A discussion of rules with disjunctive antecedents and/or consequents and rules with negated antecedents or consequents is given as well. A performance study and further analysis of the effectiveness of the additional forms of association rules are left as future work.

**[Hadley, 1998]**

Author(s): Robert F. Hadley.

Title: Connectionism and novel combinations of skills: Implications for cognitive architecture.

Technical Report: TR 1998-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 1998.

Abstract: In their renowned 1988 paper, Fodor and Pylyshyn disputed certain bold claims then issuing from the pens of some prominent connectionists. At that time, there were many who heralded the emergence of connectionism as a new paradigm – one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who believe that certain of Fodor's and Pylyshyn's crucial arguments underestimated the power of distributed representations. Moreover, some continue to defend connectionism as a more realistic paradigm for modelling cognition *at all levels of abstraction* than the classical algorithmic methods of AI. Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains possess an architecture anything like classical von Neumann machines. Our brains are not digital computers, and so, cannot support a classical architecture. In this paper, I advocate a middle ground. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate mental models – at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that most human mental skills must reside in *separate* connectionist modules or "sub-networks". Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ *novel combinations* of skill modules in rule following and problem solving, lead to the *plausible conclusion* that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of *information flow* and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist.

**[Hafer, 1998a]**

Author(s): L. Hafer.

Title: CONSYS: A dynamic constraint system.

Technical Report: TR 1998-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1998. (PostScript)

Abstract: CONSYS is a constraint system utility package tailored for large, sparse constraint systems with frequent addition and deletion of constraints and variables. It is specialised for use in a linear or integer programming environment. CONSYS makes use of an underlying package of vector utility functions which handle expanded and packed vectors.

**[Hafer, 1998b]**

Author(s): L. Hafer.

Title: DYLP: A dynamic LP code.

Technical Report: TR 1998-23, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1998. (PostScript)

Abstract: DYLP is a linear programming code implementing a dynamic simplex algorithm similar to that outlined by Padberg in 'Linear Optimisation and Extensions'. The code attempts to maintain a reduced active constraint system by regularly purging loose constraints and variables with unfavourable reduced costs, and adding violated constraints and variables with favourable reduced costs. In essence, the code alternates between primal and dual simplex algorithms, using dual simplex to reoptimise after updating the constraint set and primal simplex to reoptimise after updating the variable set.

**[Hafer, 1998c]**

Author(s): L. Hafer.

Title: A note on the relationship of primal and dual simplex.

Technical Report: TR 1998-21, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1998. (PostScript)

Abstract: Details theoretical and mechanical correspondences between primal and dual simplex that are of interest when implementing the revised dual simplex algorithm with bounded primal variables.

**[Hayden, 1998]**

Author(s): S. Hayden.

Title: Engineering coordinated multi-agent systems.

Technical Report: TR 1998-20, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1998.

Abstract: This paper surveys the current state of the art in agent-oriented software engineering, focusing on the area of coordinated multi-agent systems. In multi-agent systems, the interactions between the agents are crucial in determining the effectiveness of the system. Hence the adoption of an appropriate coordination mechanism is pivotal in the design of multi-agent system architectures. This paper does not focus on agent theory, but rather on the development of an agent-oriented software engineering methodology, collaboration architectures, and design patterns for collaboration. Important issues are those of agent modeling and methodologies for large-scale complex systems; coordination theory, models and languages; planning in the coordination process; the reuse of coordination models and patterns; and learning new models as an adaptation to dynamic, changing worlds. Connections to recent ideas in software architecture, design patterns and coordination theory are explored. A catalog of coordination patterns inherent in multi-agent architectures is presented. Such patterns may be utilized in the architectural design stage of an agent-oriented software engineering methodology.

**[Lantin and Carpendale, 1998]**

Author(s): M. Lantin and M. S. T. Carpendale.

Title: Supporting detail-in-context for the DNA representation, H-curves.

Technical Report: TR 1998-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 1998.

Abstract: This paper describes a tool for visual exploration of long DNA sequences represented as H-curves. Although very long DNA sequences can be plotted using H-curves, micro-features are lost as sequences get longer. Using an approach inspired by a visualization technique described in [Carpendale:3DP95], we present a new three-dimensional distortion algorithm to allow the magnification of a sub-segment of an H-curve while preserving a global view of the curve. This is particularly appropriate for H-curves as they provide useful visual information at several resolutions. Our detail-in-context technique creates a non-occluding 3D magnification "lens" which allows roving searches by magnifying the H-curve as the lens is moved along the curve's length. We tested our application on several sequences and found that it significantly improved our ability to discover areas of interest. We believe that the capability of zooming while retaining context will greatly enhance the usefulness of H-curves as a means of displaying and comparing long DNA sequences generated by numerous sequencing projects.

**[Li et al., 1998]**

Author(s): Ze-Nian Li, Osmar R. Zaiane, and Bing Yan.

Title: C-BIRD: Content-based image retrieval in digital libraries using illumination invariance and recognition kernel.

Technical Report: TR 1998-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 1998.

Abstract: With huge amounts of multimedia information connected to the global information network, the Internet, efficient and effective image retrieval from large image and video repositories has become a pressing research issue. This article presents our research in the C-BIRD (Content-Based Image Retrieval in Digital-libraries) project. In addition to the use of common features such as color, texture, shape and their conjuncts, and the combined content-based and description-based techniques, it is shown that (a) the color-channel normalization enables Search by Illumination Invariance, and (b) the multi-level recognition kernel facilitates Search by Object Model in image and video databases.

**[Mitra and Hafer, 1998]**

Author(s): P. Mitra and L. Hafer.

Title: Efficient computation of the medial axis.

Technical Report: TR 1998-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 1998. (PostScript)

Abstract: In this paper we present an efficient algorithm to compute the medial axis of convex polyhedra in any fixed dimension. We do this by establishing the duality of the medial axis of a convex polyhedron in d dimensions and the half-space intersection in d+1 dimensions. The running time of our algorithm to compute the medial axis for a convex polyhedron in d ≥ 3 dimensions is O(n^⌈d/2⌉) when the input is the set of n d-dimensional half-spaces whose intersection defines the convex polyhedron and d is fixed. The algorithm is optimal for d=3, *i.e.*, its time complexity is Θ(n^2). Also the algorithm is optimal for every fixed even dimension.

**[Mitrovic-Minic, 1998]**

Author(s): Snezana Mitrovic-Minic.

Title: Pickup and delivery problem with time windows: A survey.

Technical Report: TR 1998-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 1998. (PostScript)

Abstract: This paper surveys time-constrained pickup and delivery problems and the methods used to solve them. The pickup and delivery problem is the problem of finding a set of optimal routes for a fleet of vehicles in order to serve transportation requests. Each transportation request is defined by a pickup location, a delivery location and a load. If the pickup and/or delivery location has a time interval within which it should be visited, the problem is known as the pickup and delivery problem with time windows. This time-restricted version of the problem is the most researched one, because almost all pickup and delivery problems in practice deal with some time restrictions. Practical problems that can be modeled as pickup and delivery problems include dial-a-ride problems, handicapped person transportation problems, and courier company pickup and delivery problems. This survey covers the most recent optimization methods and heuristics used for solving the pickup and delivery problem with time windows.

**[Sun et al., 1998]**
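The time-window feasibility check underlying the problem surveyed in [Mitrovic-Minic, 1998] can be sketched in a few lines. This is an illustrative sketch, not code from the report; the data layout (stops as `(location, earliest, latest, load_delta)` tuples) is a hypothetical choice:

```python
def route_feasible(stops, travel, capacity):
    """Check time windows and vehicle capacity along one ordered route.

    stops: list of (loc, earliest, latest, load_delta) tuples; a pickup has
    a positive load_delta and its delivery the matching negative one.
    travel: dict mapping (from_loc, to_loc) to travel time.
    """
    t, load, prev = 0, 0, None
    for loc, earliest, latest, delta in stops:
        if prev is not None:
            t += travel[(prev, loc)]
        t = max(t, earliest)        # wait if the vehicle arrives early
        if t > latest:              # time window violated
            return False
        load += delta
        if load > capacity or load < 0:
            return False            # capacity exceeded or delivery before pickup
        prev = loc
    return True
```

A route-construction heuristic would call a check like this each time it tries to insert a new request into a partial route.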

Author(s): Yinlong Sun, F. David Fracchia, and Mark S. Drew.

Title: A composite model for representing spectral functions.

Technical Report: TR 1998-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1998. (PostScript)

Abstract: In this report we propose a new model to represent spectral functions called the composite model. This model is built on the idea that we decompose all spectral functions into smooth and spiky components, each with its own distinct representation. Specifically, a smooth spectrum can be expressed as a linear combination of a set of given basis functions and thus be represented through the corresponding coefficients. The discrete spikes in a spiky spectral function can be represented directly through their locations and heights. For the smooth spectra, we propose re-sampling the functions that are reconstructed from the coefficients in the linear combination to improve efficiency. Spectral multiplications are thus greatly reduced in complexity. This new model demonstrates remarkable advantages in representing spectral functions with respect to accuracy, compactness, computational efficiency, portability, and flexibility. We expect that this work will significantly contribute to the research and applications in color science, realistic image synthesis, and color image analysis.

**[Tatu and Scurtescu, 1998]**
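The decomposition described in the [Sun et al., 1998] abstract can be sketched as follows. This is a hedged illustration, not the report's actual model: the cosine basis, the sampling grid, and the treatment of spikes (each spike scaled by the other spectrum's smooth value at its location, spike-spike products ignored) are assumptions made for the sake of a runnable example:

```python
import math

def basis(k, x):
    # Illustrative smooth basis on wavelengths normalized to [0, 1].
    return math.cos(math.pi * k * x)

def reconstruct(coeffs, x):
    # Smooth component: linear combination of basis functions.
    return sum(c * basis(k, x) for k, c in enumerate(coeffs))

def multiply(spec_a, spec_b, samples=16):
    """Multiply two composite spectra (coeffs, spikes-dict).

    The smooth parts are re-sampled and multiplied pointwise; each spike
    is scaled by the other spectrum's smooth value at its location.
    """
    ca, spikes_a = spec_a
    cb, spikes_b = spec_b
    xs = [i / (samples - 1) for i in range(samples)]
    smooth = [reconstruct(ca, x) * reconstruct(cb, x) for x in xs]
    spikes = {x: h * reconstruct(cb, x) for x, h in spikes_a.items()}
    spikes.update({x: h * reconstruct(ca, x) for x, h in spikes_b.items()})
    return smooth, spikes
```

The point of the representation is visible in `multiply`: the product of two smooth spectra costs one pointwise pass over a small sample grid rather than a full convolution of coefficient vectors.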

Author(s): Serban Tatu and Marius Scurtescu.

Title: Resolving URNs with Java and JavaScript.

Technical Report: TR 1998-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 1998. (PostScript)

Abstract: This paper presents a solution allowing authors of HTML documents to make use of Uniform Resource Names (URNs) instead of Uniform Resource Locators (URLs). We describe the design and implementation of a client-server architecture for URN resolution, based on Java applets and CORBA servers. The process is carried out transparently for the user of such documents by a scheme using JavaScript. This scheme takes over the resolution of links in the browser and redirects it to the client applet, which in turn obtains the resolution information from the CORBA server. The main advantage of this approach is the ease of integration with current browser technology: pages containing URNs can run in any browser supporting Java and JavaScript.

**[Walenstein, 1998]**

Author(s): Andrew Walenstein.

Title: Towards a framework for describing benefits of software comprehension tools.

Technical Report: TR 1998-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1998.

Abstract: A frequently expressed reason for empirically studying programmers and maintainers is to understand software comprehension activities well enough to facilitate the design of helpful tools. Models of software comprehension have been proposed, but translating these into practical design knowledge has been, in practice, exceedingly difficult. This paper proposes a method for using existing comprehension models to generate design goals and to clarify design issues. A crucial part of this method is a scheme for explaining the value of tool features by relating them to expected improvements in task performance. This is argued to be an important intellectual asset for designers. It enables the analysis and comparison of several categories of software comprehension tools according to how they are perceived to assist comprehension. The implications this method has for reasoning about how to support software comprehension are outlined.

**[Wei and Li, 1998a]**

Author(s): Jie Wei and Ze-Nian Li.

Title: Camera motion recovery from videos with foveate wavelet transform.

Technical Report: TR 1998-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 1998.

Abstract: In this paper, a new variable resolution technique, the Foveate Wavelet Transform (FWT), is proposed to represent video frames in an effort to reduce the bandwidth/storage overhead through emulating animate visual systems. As an application, an FWT-based method purporting to recover camera motions from video clips is developed. With this method, the motion vectors are first estimated for the FWT representation of each frame using a foveate two-pass algorithm. The camera motions, namely, pan, tilt, zooming, and a mixture of them, are recovered next based on the behaviors of the dense motion fields. The efficacy of the FWT is further delineated.

**[Wei and Li, 1998b]**

Author(s): Jie Wei and Ze-Nian Li.

Title: On active camera control with foveate wavelet transform.

Technical Report: TR 1998-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 1998.

Abstract: In this paper, a new variable resolution technique, the Foveate Wavelet Transform (FWT), is proposed to represent video frames in an effort to emulate animate visual systems. Compared to the existing variable resolution techniques, the strength of the proposed scheme encompasses its flexibility and conciseness while supporting interesting behaviors resembling the animate vision system. With the FWT as the representation, an efficient scheme to achieve purposive vision is developed. The Foveate Potential Motion Area is first labeled, the camera motion is then determined by following the labeled moving object. Experiments based on our computer-controlled pan-tilt-zoom camera have demonstrated its efficacy in real-time active camera control.

**[Zaiane, 1998]**

Author(s): Osmar R. Zaiane.

Title: From resource discovery to knowledge discovery on the Internet.

Technical Report: TR 1998-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, August 1998.

Abstract: More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with ``trails'' linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, resource discovery on the Internet is still frustrating and sometimes even useless when simple keyword searches can convey hundreds of thousands of documents as results. Knowledge Discovery and Data Mining from the Web is a new promising research topic that is attracting tremendous interest. Is this what will help realize Vannevar's dream?


**[Bartram, 1997]**

Author(s): Lyn Bartram.

Title: Perceptual and interpretative properties of motion for information visualization.

Technical Report: TR 97-15, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 1997.

Abstract: Visualizing information in user interfaces to complex, large-scale systems is difficult due to an enormous amount of dynamic data distributed across multiple displays. While graphical representation techniques can reduce some of the cognitive overhead associated with comprehension, current interfaces suffer from the over-use of such representation techniques and exceed the human's perceptual capacity to efficiently interpret them. New display dimensions are required to support the user in information visualization. Three major issues which are problematic in complex system UI design are identified: representing the nature of change, supporting the cognitive integration of data across disparate displays, and conveying the nature of relationships between data and/or events. Advances in technology have made animation a viable alternative to static representations. Motion holds promise as a perceptually rich and efficient display dimension but little is known about its attributes for information display. This paper proposes that motion may prove useful in visualizing complex information because of its preattentive and interpretative perceptual properties. A review of animation in current user interface and visualization design and research indicates that, while there is strong intuition about the 'usefulness' of motion to communicate, there are few guidelines or empirical knowledge about how to employ it. This paper summarizes types of movement characterization from diverse disciplines and proposes an initial taxonomy of motion properties and application to serve as a framework for further empirical investigation into motion as a useful display dimension. Implementation issues are discussed with respect to real-time display requirements.

**[Beck and Jackson, 1997]**

Author(s): J. Christopher Beck and W. Ken Jackson.

Title: Constrainedness and the phase transition in job shop scheduling.

Technical Report: TR 97-21, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1997.

Abstract: Recent research has shown the existence of a "phase transition" for many combinatorial problems. In this paper we investigate the existence of a phase transition in the job shop scheduling problem. We apply standard estimates for the constrainedness of constraint satisfaction problems to job shop problems and describe empirical results identifying a phase transition at a constrainedness level of approximately 0.2. This phase transition point is much lower than expected, both from theory and in comparison with the phase transition point empirically found for other problems. We argue that this discrepancy is due to the weakness of the independence assumption used in the estimation of constrainedness.

**[Cameron, 1997]**

Author(s): Robert D. Cameron.

Title: Towards universal serial item names.

Technical Report: TR 97-16, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 3, 1997.

Abstract: Universal Serial Item Names (USINs) are proposed for the global and permanent identification of the archivable serial publications of humanity. Requirements for USINs are analyzed with an emphasis on the use of USINs in scholarly communication. A uniform naming model is described based on the hierarchical naming of serial publications and the hierarchical numbering of serial items. A number of concrete design ideas for USIN syntax are presented. A USIN Global Registry and a USIN Global Database are proposed and analyzed in terms of specific architectural features that interact to meet the requirements of publishers, librarians and scholars. Applications of the USIN concept to literature research, document retrieval, bibliography preparation and addressing the "broken links" problem of the World-Wide Web are considered.

**[Carpendale et al., 1997]**

Author(s): M. Sheelagh T. Carpendale, David J. Cowperthwaite, Margaret-Anne D. Storey, and F. David Fracchia.

Title: Exploring distinct aspects of the distortion viewing paradigm.

Technical Report: TR 97-08, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 1997.

Abstract: This paper separates multi-scale or distortion viewing techniques into their component parts. Advantages of applying these distinct components separately to graph layout are considered.

**[Delgrande, 1997]**

Author(s): J. P. Delgrande.

Title: Incorporating relevance in logics of conditionals.

Technical Report: TR 97-12, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, September 1997.

Abstract: We consider the notion of relevance in default reasoning in Artificial Intelligence. First, the role of relevant properties in default reasoning is addressed, along with a discussion of how they have been addressed in various approaches to nonmonotonic reasoning. Second, a specific approach to defeasible reasoning based on the incorporation of irrelevant properties in a conditional knowledge base is presented. A closure operation is presented, so that from a theory of default conditionals an extension is obtained wherein irrelevant properties are satisfactorily incorporated. Two characterisations are given: a fixed-point definition and a pseudo-iterative definition. Lastly, other closure operations are briefly examined, such as incorporating ``transitivities'' among defaults; not surprisingly these closure operations correspond exactly with respect to the set of default conclusions obtained from incorporating relevant properties.

**[Drew et al., 1997a]**

Author(s): Mark S. Drew, Jie Wei, and Ze-Nian Li.

Title: Illumination-invariant color object recognition via compressed chromaticity histograms of normalized images.

Technical Report: TR 97-09, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, April 1997.

Abstract: Several color object recognition methods that are based on image retrieval algorithms attempt to discount changes of illumination in order to increase performance when test image illumination conditions differ from those that obtained when the image database was created. Here we extend the seminal method of Swain and Ballard in a natural fashion to discount changing illumination. The new method is based on the first stage of the simplest color indexing method, which uses angular invariants between color image and edge image channels. That method first normalizes image channels, and then effectively discards much of the remaining information. Here we simply adopt the normalization stage as an adequate color constancy step. The resulting new method is more accurately illuminant-invariant than previous methods. Further, we replace 3D color histograms by 2D chromaticity histograms. Treating these as *images*, we implement the method in a compressed histogram-image domain using a combination of wavelet compression and Discrete Cosine Transform (DCT) to fully exploit the technique of low-pass filtering for efficiency. Results are very encouraging, with substantially better performance than other methods tested. The method is also fast, in that the indexing process is entirely carried out in the compressed domain and uses a feature vector of only 36 or 72 values.

**[Drew et al., 1997b]**
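The 2D chromaticity histogram step described in the [Drew et al., 1997a] abstract can be sketched as follows. This is only an illustration of the basic projection from RGB to chromaticity; the report's wavelet/DCT compression of the histogram image is omitted, and the bin count is an arbitrary choice:

```python
def chromaticity_histogram(pixels, bins=16):
    """Accumulate a bins x bins histogram over (r, g) chromaticities,
    where r = R/(R+G+B) and g = G/(R+G+B). Intensity cancels out of
    (r, g), which is what makes the representation robust to brightness."""
    hist = [[0] * bins for _ in range(bins)]
    for R, G, B in pixels:
        s = R + G + B
        if s == 0:
            continue  # black pixel: chromaticity undefined, skip it
        i = min(int(R / s * bins), bins - 1)
        j = min(int(G / s * bins), bins - 1)
        hist[i][j] += 1
    return hist
```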

Author(s): Mark S. Drew, Jie Wei, and Ze-Nian Li.

Title: On illumination invariance in color object recognition.

Technical Report: TR 97-07, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, March 1997.

Abstract: Several color object recognition methods that are based on image retrieval algorithms attempt to discount changes of illumination in order to increase performance when test image illumination conditions differ from those that obtained when the image database was created. Here we investigate under what general conditions illumination change can be described using a simple linear transform among RGB channels, for a multi-colored object, and adduce a different underlying principle than that usually suggested. The resulting new method, the Linear Color algorithm, is more accurately illuminant-invariant than previous methods. An implementation of the method uses a combination of wavelet compression and DCT transform to fully exploit the technique of low-pass filtering for efficiency. Results are very encouraging, with substantially better performance than other methods tested. The method is also fast, in that the indexing process is entirely carried out in the compressed domain and uses a feature vector of only 63 integers.

**[Fraigniaud and Peters, 1997]**

Author(s): Pierre Fraigniaud and Joseph G. Peters.

Title: Structured communication in cut-through routed torus networks.

Technical Report: TR 97-05, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 1997.

Abstract: This paper is a study of one-to-all and all-to-all data movement patterns on cycles and multi-dimensional toroidal meshes that use cut-through (e.g., circuit-switched, wormhole) routing with virtual channels. We present new cut-through algorithms for broadcasting, scattering, gossiping, and multi-scattering on cycles and multi-dimensional torus networks. Our algorithms use virtual channels to improve performance by controlled use of channel conflicts. We compare our algorithms to the best known store-and-forward algorithms, cut-through algorithms, and lower bounds. In most cases, we conclude that cut-through routing is best when messages are ``short'' and store-and-forward routing is best for long messages. Some of our new algorithms outperform all known algorithms for all message lengths.

**[Fujita et al., 1997]**

Author(s): Satoshi Fujita, Stephane Perennes, and Joseph G. Peters.

Title: Neighbourhood gossiping in hypercubes.

Technical Report: TR 97-03, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 1997.

Abstract: In the neighbourhood gossiping problem, each node of a network starts with a unique message and must learn the messages of all of its neighbours. In this paper, we prove upper and lower bounds for neighbourhood gossiping in hypercubes under the single-port half-duplex and single-port full-duplex communication models.

**[Gaur et al., 1997]**

Author(s): Daya Ram Gaur, W. Ken Jackson, and William S. Havens.

Title: Detecting unsatisfiable CSPs by coloring the micro-structure.

Technical Report: TR 97-01, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 1997. (PostScript)

Abstract: Constraint satisfaction research has focussed on consistency checking using k-consistency and its variations such as arc-consistency, and path-consistency. We define a new form of consistency checking that is based on coloring the micro-structure graph of a constraint satisfaction problem (CSP). In our formulation, if the micro-structure graph of a CSP with n variables can be colored with n-1 colors then the problem is unsatisfiable. This new notion of consistency by coloring is compared to arc-consistency. We provide examples that show that neither arc-consistency nor consistency by coloring is more powerful than the other in a theoretical sense. We also provide preliminary experimental evidence on random CSPs that suggests that consistency by coloring can detect unsatisfiability slightly more often than arc-consistency.

**[Gerhard, 1997]**
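The consistency-by-coloring test from the [Gaur et al., 1997] abstract can be sketched concretely. In the micro-structure graph the vertices are (variable, value) pairs and edges join mutually consistent assignments to different variables, so any solution forms an n-clique; a valid (n-1)-coloring therefore rules out solutions. This sketch uses greedy coloring, which only upper-bounds the chromatic number, so it is a sufficient test that may miss colorings an exact method would find; the data layout is a hypothetical choice, not the report's implementation:

```python
from itertools import combinations

def unsat_by_coloring(domains, consistent):
    """domains: {var: values}; consistent(v1, a1, v2, a2) -> bool.
    Returns True when a greedy (n-1)-coloring of the micro-structure
    graph proves the CSP unsatisfiable; False is inconclusive."""
    nodes = [(v, a) for v in domains for a in domains[v]]
    adj = {u: set() for u in nodes}
    for (v1, a1), (v2, a2) in combinations(nodes, 2):
        if v1 != v2 and consistent(v1, a1, v2, a2):
            adj[(v1, a1)].add((v2, a2))
            adj[(v2, a2)].add((v1, a1))
    color = {}
    for u in nodes:  # greedy coloring: smallest color unused by neighbours
        used = {color[w] for w in adj[u] if w in color}
        color[u] = next(c for c in range(len(nodes) + 1) if c not in used)
    return len(set(color.values())) <= len(domains) - 1
```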

Author(s): David Bruce Gerhard.

Title: Computer music analysis.

Technical Report: TR 97-13, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, October 1997.

Abstract: Computer music analysis is investigated, with specific reference to the current research fields of automatic music transcription, human music perception, pitch determination, note and stream segmentation, score generation, time-frequency analysis techniques, and musical grammars. Human music perception is investigated from two perspectives: the computational model perspective desires an algorithm that perceives the same things that humans do, regardless of how the program accomplishes this, and the physiological model perspective desires an algorithm that models exactly how humans perceive what they perceive.

**[Hadley and Cardei, 1997]**

Author(s): Robert F. Hadley and Vlad C. Cardei.

Title: Acquisition of the active-passive distinction from sparse input and no error feedback.

Technical Report: TR 97-04, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 1997. (also published as CSS-IS TR97-01).

Abstract: A connectionist-inspired, parallel processing network is presented which learns, on the basis of (relatively) sparse input, to assign meaning interpretations to novel test sentences in both active and passive voice. Training and test sentences are generated from a simple recursive grammar, but once trained, the network successfully processes thousands of sentences containing deeply embedded clauses. All training is unsupervised with regard to error feedback - only Hebbian and Kohonen forms of training are employed. In addition, the active-passive distinction is acquired without any supervised provision of cues or flags (in the output layer) that indicate whether the input sentence is in active or passive voice. In more detail: (1) The model learns on the basis of a corpus of about 1000 sentences while the set of potential test sentences contains over 100 million sentences. (2) The model generalizes its capacity to interpret active and passive sentences to substantially deeper levels of clausal embedding. (3) After training, the model satisfies criteria for strong syntactic and strong semantic systematicity that humans also satisfy. (4) Symbolic message passing occurs within the model's output layer. This symbolic aspect reflects certain prior language acquisition assumptions.

**[Han et al., 1997]**

Author(s): Jiawei Han, Micheline Kamber, and Jenny Chiang.

Title: Mining multi-dimensional association rules using data cubes.

Technical Report: TR 97-06, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, February 1997.

Abstract: Methods for mining association rules have been studied extensively. However, most previous studies have been confined to the mining of single dimensional and single variable association rules. There are applications in relational databases and data warehouses which require the mining of multi-dimensional association rules. In this paper, we study efficient methods for mining multi-dimensional association rules using a data cube structure, a popular data structure used in data warehouses. Efficient algorithms are developed for mining multi-dimensional association rules by either using an existing data cube, when available, or constructing a data cube on the fly. In both cases, the algorithms outperform the direct application of a table-based Apriori algorithm to the mining of multi-dimensional association rules. The extension of the method for mining multi-level, multi-dimensional association rules and meta-rule guided mining is also discussed in the paper.

**[Irgens, 1997a]**

Author(s): Morten Irgens.

Title: Constructive heuristics for satisfiability problems.

Technical Report: TR 97-18, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1997.

Abstract: The satisfiability problem is the problem of finding a model for a propositional theory in conjunctive normal form. Satisfiability problems can be solved using search. Most constructive search algorithms, like the class of backtrack algorithms, guarantee completeness, and are for this reason a popular choice for many applications. Backtrack algorithms profit from heuristics for ordering the set of possible choices in each step. Good choice heuristics can reduce the size of search trees by several orders of magnitude. This paper continues a discussion in J. N. Hooker and V. Vinay: Branching rules for satisfiability (*Journal of Automated Reasoning*, volume 15, 1995, pages 359-383) on constructive heuristics for SAT, and, based on Rajeev Motwani and Prabhakar Raghavan: Randomized Algorithms (*Cambridge University Press*, 1995, ISBN 0-521-47465-5), presents a constructive heuristic that guarantees polynomial run times on identified subclasses of the problem.

**[Irgens, 1997b]**

Author(s): Morten Irgens.

Title: Why simulated annealing works and why it doesn't.

Technical Report: TR 97-17, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1997.

Abstract: Probabilistic sampling can be seen as a method for solving optimisation problems, given that a suitable probability distribution has been defined over its state space. It is natural to use Markov chain techniques for such sampling. Under certain conditions a nonstationary Markov chain (consisting of states in the solution space of the problem) asymptotically converges in probability to a global minimum. Simulated Annealing is a weak method for solving optimisation problems based on probabilistic sampling. This report presents Simulated Annealing as a probabilistic sampling method. The theory that gives a probabilistic guarantee of optimal solutions is described and examples are given. This explains why Simulated Annealing should work. However, the last part also shows us where this breaks down, and hence when Simulated Annealing doesn't work. The paper discusses the theory behind Simulated Annealing in contrast to real life applications of the algorithm, and its relationship to static noise level algorithms and the 'No Free Lunch Theorem for Search'.

**[Irgens and Gaur, 1997]**
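The Markov-chain sampling view of Simulated Annealing discussed in the [Irgens, 1997b] abstract corresponds to the familiar Metropolis-style loop. A minimal sketch, with illustrative parameter names and a geometric cooling schedule (the asymptotic optimality guarantee the report analyses requires a much slower, logarithmic schedule, which is exactly the gap between theory and practice the report discusses):

```python
import math
import random

def simulated_annealing(energy, neighbour, x0, t0=1.0, alpha=0.95, steps=2000):
    """Metropolis-style sampling: always accept downhill moves, accept
    uphill moves with probability exp(-dE/T), and lower T each step so the
    chain concentrates on low-energy states."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbour(x)
        de = energy(y) - energy(x)
        if de <= 0 or random.random() < math.exp(-de / t):
            x = y
            if energy(x) < energy(best):
                best = x
        t *= alpha  # geometric cooling: fast in practice, but voids the
                    # asymptotic convergence guarantee
    return best
```

For example, minimising `lambda x: (x - 3) ** 2` over the integers with a `neighbour` that moves one step left or right will typically settle at 3.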

Author(s): Morten Irgens and Daya Ram Gaur.

Title: A condition for solvability of constraint satisfaction problems.

Technical Report: TR 97-20, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1997.

Abstract: This paper presents an application of the Lovász local lemma in order to characterize some classes of constraint satisfaction problems. These results are very general, and hence we expect the properties to be weak. We show that this is the case on some subclasses.

**[Irgens and Havens, 1997]**

Author(s): Morten Irgens and William S. Havens.

Title: An overview of arc consistency algorithms.

Technical Report: TR 97-19, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, November 1997.

Abstract: A constraint network is a collection of variables and constraints on these variables. The consistency enforcement problem is the problem of increasing the overall consistency in a constraint network. Since enforcing full consistency is an NP-hard problem, many sub classes have been defined. Most notably among these is *arc consistency* for binary constraint networks. Arc consistency is usually enforced using constraint propagation algorithms. The published algorithms have varied a lot in style. This report gives a uniform presentation of these. We believe this could be of help for implementors and researchers in the field.

**[Jackson et al., 1997]**
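The propagation scheme mentioned in the [Irgens and Havens, 1997] abstract can be illustrated with an AC-3-style sketch, one of the classic algorithms such a survey covers. This is a generic textbook formulation with hypothetical data structures, not code from the report: each arc (x, y) is revised by deleting values of x without support in y, and arcs into x are re-queued whenever x's domain shrinks:

```python
from collections import deque

def ac3(domains, constraints):
    """domains: {var: list of values}; constraints: {(x, y): pred(a, b)}
    with one entry per directed arc. Prunes domains in place; returns
    False if some domain is wiped out (the network is inconsistent)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        removed = [a for a in domains[x]
                   if not any(pred(a, b) for b in domains[y])]
        if removed:
            domains[x] = [a for a in domains[x] if a not in removed]
            if not domains[x]:
                return False  # domain wipe-out: no solution exists
            # x changed, so arcs pointing into x must be revised again
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True
```

On the constraint x < y with domains x in {1, 2, 3} and y in {2}, propagation prunes x's domain down to {1}.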

Author(s): W. Ken Jackson, William S. Havens, and Harry Dollard.

Title: Staff scheduling: A simple approach that worked.

Technical Report: CMPT97-23, School of Computing Science, Simon Fraser University, 1997.

Abstract: This paper describes our experiences in solving a real staff scheduling problem. We experimented with a variety of approaches that included: studying the structure of the problem as list-coloring in interval graphs, using constraint-based methods, using greedy algorithms, and using a variety of iterative improvement approaches. In the end, a very simple randomized greedy algorithm proved able to generate good schedules using very little computational resources.

**[Kamber et al., 1997]**

Author(s): Micheline Kamber, Jiawei Han, and Jenny Y. Chiang.

Title: Using data cubes for metarule-guided mining of multi-dimensional association rules.

Technical Report: TR 97-10, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 1997.

Abstract: Metarule-guided mining is an interactive approach to data mining, where users probe the data under analysis by specifying hypotheses in the form of metarules, or pattern templates. Previous methods for metarule-guided mining of association rules have primarily used a transaction/relation table-based structure. Such approaches require costly, multiple scans of the data in order to find all the large itemsets. In this paper, we employ a novel approach to metarule-guided, multi-dimensional association rule mining which explores a data cube structure. We propose algorithms for metarule-guided mining using different cube structures: given a metarule containing p predicates, we compare mining on an n-dimensional cube structure (where p < n) with mining on smaller multiple p-dimensional cubes. In addition, we propose an efficient method for precomputing the cube, which takes into account the constraints imposed by the given metarule. Our preliminary performance study shows that a cube-based metarule-guided algorithm for mining multi-dimensional association rules is efficient and can easily be extended to the mining of multi-level, multi-dimensional association rules.

**[Lantin, 1997]**

Author(s): Maria Lantin.

Title: Computer simulations of developmental processes.

Technical Report: TR 1997-24, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, May 1997.

Abstract: Developmental models have been used extensively by biologists to elucidate the mysteries of developmental processes. This paper summarizes the different computer models put forward and their relation to the mechanisms believed responsible for biological development.

**[Pantel, 1997]**

Author(s): Christian Pantel.

Title: A framework for comparing web-based learning environments.

Technical Report: TR 97-14, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, 1997. Master's Thesis.

Abstract: As notions such as the information revolution and knowledge-based economies become increasingly realistic, there is considerable pressure on academic, corporate and governmental decision makers to improve both the accessibility and the quality of learning opportunities that they provide to those they serve. Many are turning to World Wide Web-based technologies as part of the solution. A Web-based learning environment is a networked computer application that enables people to learn from a distance. Learners can be physically separated from teachers and from each other, and they can participate in the learning environment at their convenience. Choosing which Web-based learning environment to adopt is an important decision as it often involves a substantial investment for organizations and signi cantly impacts how people will learn. In order to better understand how people learn and the possibilities for Web-based learning environments to support learning, we review the relevant educational theory. We also review the human-computer interaction literature to better understand how people can e ectively use software. Intuitive and ad-hoc comparisons between competing products are not likely to lead to adopting the optimal Web-based learning environment for a particular organization. Therefore, we propose a comparison framework for Web-based learning environments which is based primarily on educational theory and human-computer interaction research. The comparison framework consists of a large number of comparison dimensions covering a broad range of issues relevant to the adoption of a Web-based learning environment. The comparison framework serves two main audi- ences. The primary audience consists of decision makers of organizations considering the adoption of a Web-based learning environment. 
We propose a methodology that these decision makers can follow when using the comparison framework to guide them in selecting the most appropriate Web-based learning environment for their organization. The secondary audience consists of developers of Web-based learning environments. By examining the comparison dimensions and by reading the literature reviews, developers may identify strengths and weaknesses of their product which can be useful in marketing and in planning future product releases.

**[Peters and Spencer, 1997]**

Author(s): Joseph G. Peters and Curtis C. Spencer.

Title: Global communication on circuit-switched toroidal meshes.

Technical Report: TR 97-02, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, January 1997.

Abstract: In this paper, we investigate the uses of virtual channels and multiple communication ports to improve the performance of global communication algorithms for cycles and multi-dimensional toroidal meshes. We use a linear cost model to compare the performances of the best single-port algorithms for broadcasting, scattering, gossiping, and multi-scattering with algorithms that can use multiple ports simultaneously. We conclude that the use of multiple ports can enhance performance for all four communication patterns when propagation costs are dominant and virtual channels can reduce the total start-up costs. The two mechanisms interact to produce different types of trade-offs for the different communication patterns.

**[Racine and Yang, 1997]**

Author(s): Kirsti Racine and Qiang Yang.

Title: Redundancy and inconsistency detection in large and unstructured case bases.

Technical Report: TR 97-11, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, July 1997.

Abstract: With the dramatic proliferation of case based reasoning systems in commercial applications, many case bases are now becoming legacy systems. They represent a significant portion of an organization's assets, but they are large and difficult to maintain. One of the contributing factors is that these case bases are often large and yet unstructured; they are represented in natural language text. Adding to the complexity is the fact that the case bases are often authored and updated by different people from a variety of knowledge sources, making it highly likely for a case base to contain redundant and inconsistent knowledge. In this paper, we present methods and a system for maintaining large and unstructured case bases. We focus on two difficult problems in case-base maintenance: redundancy and inconsistency detection. These two problems are particularly pervasive when one deals with an unstructured case base. We will discuss both algorithms and a system for solving these problems. As the ability to contain the knowledge acquisition problem is of paramount importance, our methods allow one to express relevant domain expertise for detecting both redundancy and inconsistency naturally and effortlessly. Empirical evaluations of the system prove the effectiveness of the methods in several large domains.

**[Wei and Li, 1997]**

Author(s): Jie Wei and Ze-Nian Li.

Title: An efficient two-pass MAP-MRF algorithm for motion estimation by use of mean field theory.

Technical Report: TR 97-22, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada, December 1997.

Abstract: This paper presents a two-pass algorithm for estimating motion vectors from image sequences. In the proposed algorithm, the motion estimation is formulated as a problem of obtaining the Maximum A Posteriori in the Markov Random Field (MAP-MRF). An optimization method based on the Mean Field Theory (MFT) is adopted to conduct the MAP search. The estimation of motion vectors is modeled by only two MRFs, namely, the motion vector field and the unpredictable field. Instead of utilizing the line field, a truncation function is introduced to handle the discontinuity between the motion vectors on neighboring sites. In this algorithm, a "double threshold" preprocessing pass is first employed to partition the sites into three regions, whereby the ensuing MFT-based pass for each MRF is conducted on one or two of the three regions. With this algorithm, no significant difference exists between the block-based and pixel-based MAP searches any more. Consequently, a nice compromise between precision and efficiency can be struck with ease. In order to render our algorithm more resilient against gross errors, the Mean Absolute Difference (MAD) instead of the Mean Square Error (MSE) is selected as the measure of difference, which is more reliable according to robust statistics. This is supported by our experimental results from both synthetic and real-world image sequences. The proposed two-pass algorithm is much faster than any other MAP-MRF motion estimation methods reported in the literature so far.
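The abstract's robustness argument for MAD over MSE can be illustrated with a minimal sketch (not the report's algorithm; the block values are invented): a single gross error inflates a squared-error matching cost quadratically but an absolute-difference cost only linearly.

```python
# Minimal sketch (invented data, not the report's algorithm) of why the
# Mean Absolute Difference (MAD) is a more robust block-matching cost than
# the Mean Square Error (MSE) in the presence of gross errors.

def mse(a, b):
    # mean of squared differences between corresponding samples
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mad(a, b):
    # mean of absolute differences between corresponding samples
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

block = [10.0, 12.0, 11.0, 13.0]       # reference block
match = [11.0, 13.0, 12.0, 14.0]       # a good match: uniform offset of 1
corrupted = [111.0, 13.0, 12.0, 14.0]  # same match with one gross error

print(mse(block, match), mad(block, match))          # 1.0 1.0
print(mse(block, corrupted), mad(block, corrupted))  # 2551.0 26.0
```

Under the clean match both costs agree; with one corrupted sample, MSE grows by a factor of 2551 while MAD grows by only 26, so a MAD-based search is far less likely to be dominated by outliers.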

**[Adhikary, 1996]**

Author(s): Junas Adhikary.

Title: Integration of constraint reasoning and simulation models in forest harvest scheduling.

Technical Report: CMPT96-04, School of Computing Science, Simon Fraser University, November 1996.

Abstract: The forest harvest planning and scheduling problem is an important part of forest resource management. It is a complex task requiring expertise and the integration of multiple disciplines. We review mathematical programming as applied to the forest harvesting problem. The review suggests that long-term scheduling is difficult because of the prohibitive size of the model. Such long-term scheduling allows one to see whether sustainable forestry is actually being practiced. Thus, this method alone is not sufficient for sustainable forest harvesting. In this paper we propose how constraint reasoning techniques from Artificial Intelligence may be used along with simulation modelling to solve the forest harvesting problem effectively. We show how these techniques can be used to better model forest harvest plans and describe our work in progress towards building a better forest harvest scheduling system.

**[Cameron, 1996]**

Author(s): Robert D. Cameron.

Title: Not just E-journals: Providing and maintaining access to serials and serial information through the World-Wide Web.

Technical Report: TR 96-01, School of Computing Science, Simon Fraser University, April 1996.

Abstract: The Directory of Computing Science Journals is being developed as a prototype for Internet serials directories accessible through the World-Wide Web. Each entry in the directory consists of a web page providing basic journal information and a comprehensive collection of links to known Internet-based resources related to the journal. Design goals for the directory are to provide a comprehensive, reliable, efficient, authoritative and up-to-date information resource for computing science journals. Particular emphasis has been placed on proactive maintenance techniques for correction and enhancement of the link collections.

**[Hepting et al., 1996]**

Author(s): D. H. Hepting, F. D. Fracchia, J. C. Dill, and R. D. Russell.

Title: Cogito: a system for computer-aided visualization.

Technical Report: TR 96-02, School of Computing Science, Simon Fraser University, August 1996. Presented at SIGGRAPH 96 as an unpublished technical sketch.

Abstract: We contend that the computer can aid the creative endeavour of visualization beyond image production. Our system will allow users to effectively find personally meaningful visualizations amongst all those available.

**[Zaiane et al., 1996]**

Author(s): Osmar Zaiane, Peter Triantafillou, and Jiawei Han.

Title: Consistency in card-based mobile databases: Sharing digital money by replicating smart cards.

Technical Report: TR 96-03, School of Computing Science, Simon Fraser University, August 1996.

Abstract: Memory cards and smart cards have large storage capacities, turning them into hand-held databases. Given only infrequent connections between hosts, updating these mobile databases can lead to consistency problems. This paper discusses some of these problems and attempts to give some solutions with applications in electronic commerce.

**[Cameron, 1995]**

Author(s): Robert D. Cameron.

Title: A universal citation database as a catalyst for reform in scholarly communication.

Technical Report: TR 95-07, School of Computing Science, Simon Fraser University, May 1995.

Abstract: A universal, Internet-based, bibliographic and citation database would link every scholarly work ever written—no matter how published—to every work that it cites and every work that cites it. Such a database could revolutionize many aspects of scholarly communication: literature research, keeping current with new literature, evaluation of scholarly work, choice of publication venue, among others. Models are proposed for the cost-effective operational and technical organization of such a database as well as for a feasible initial goal: the semi-universal citation database.

**[Cowperthwaite et al., 1995]**

Author(s): David J. Cowperthwaite, M. Sheelagh T. Carpendale, and F. David Fracchia.

Title: Distortion viewing techniques for 3-dimensional data.

Technical Report: TR 95-06, School of Computing Science, Simon Fraser University, November 1995.

Abstract: As the use of three-dimensional information presentation becomes more prevalent, the need for effective viewing tools grows accordingly. Much work has been done in developing tools for 2D spaces which allow for detail in context views. We examine the extension of such 2D methods to 3D and explore the limitations encountered in accessing internal regions of the data with these methods. We then describe a novel solution to this problem of internal access with the introduction of a distortion function which creates a clear line of sight to the focus, revealing sections previously obscured. The distortion is symmetric about the line of sight and is smoothly integrated back into the original 3D layout.

**[Dakic et al., 1995]**

Author(s): Tamara Dakic, Junas Adhikary, Daya Ram Gaur, and W. Ken Jackson.

Title: Backtrack-free search for resource allocation problems.

Technical Report: TR 95-02, School of Computing Science, Simon Fraser University, February 1995.

Abstract: Freuder has previously given conditions that guarantee backtrack-free search when solving binary constraint networks. Freuder's condition relies on the "width" of the constraint network. Computing the width of a constraint network involves minimizing over all possible orderings of the vertices in the network. This paper presents a similar result for backtrack-free search when solving resource allocation problems. Our result uses a measure analogous to width that can be computed more easily.

**[Gavoille and Guévremont, 1995]**

Author(s): Cyril Gavoille and Eric Guévremont.

Title: Worst case bounds for shortest path interval routing.

Technical Report: TR 95-03, School of Computing Science, Simon Fraser University, February 1995.

Abstract: Consider *shortest path interval routing*, a popular memory-balanced method for solving the routing problem on arbitrary networks. Given a network G, let IRS(G) denote the maximum number of intervals necessary to encode groups of destinations on an edge, minimized over all shortest path interval routing schemes on G. In this paper, we establish tight worst case bounds on IRS(G). More precisely, for any n, we construct a network G of n nodes with IRS(G) in Θ(n), thereby improving on the best known lower bound of Ω(n / log n). We also establish a worst case bound on bounded degree networks: for any Δ ≥ 3 and any n, we construct a network G_Δ of n nodes and maximum degree Δ with IRS(G_Δ) in Ω(√n).

**[Han and Fu, 1995]**

Author(s): Jiawei Han and Yongjian Fu.

Title: Discovery of multiple-level association rules from large databases.

Technical Report: TR 95-05, School of Computing Science, Simon Fraser University, March 1995.

Abstract: Discovery of association rules from large databases has recently been a focused topic in database mining research. Previous studies discover association rules at a single concept level; however, mining association rules at multiple concept levels may lead to finding more informative and refined knowledge from data. In this paper, we study efficient methods for mining multiple-level association rules from large transaction databases. A top-down progressive deepening method is proposed by extension of some existing (single-level) association rule mining algorithms. In particular, a group of algorithms for mining multiple-level association rules are developed and their relative performance is tested on different kinds of transaction data. Relaxation of the rule conditions for finding flexible multiple-level association rules is also discussed. Our study shows that efficient algorithms can be developed for the discovery of interesting and strong multiple-level association rules from large databases.

**[Hepting et al., 1995]**

Author(s): Daryl H. Hepting, Weizhang Huang, and Robert D. Russell.

Title: Case study: A visual tool for moving mesh numerical methods.

Technical Report: TR 95-09, School of Computing Science, Simon Fraser University, November 1995. Presented during the *Visualization '95* Case Study session on Philosophy and Mathematics, but it does not appear in the conference proceedings.

Abstract: Time-dependent partial differential equations are indispensable in scientific and engineering research, where they are used to model physical phenomena such as fluid flow and combustion. Moving mesh numerical solution methods have proven to be particularly well-suited to compute solutions to such problems which change non-uniformly over time. Computer visualization of moving mesh methods is an extension of an approach using computers to study mathematics, first advocated by von Neumann. Uses of this visualization include verifying the correct operation of the numerical algorithm; providing insight into improvements to the numerical algorithm; and interpreting the operation of the numerical algorithm to a larger community of mathematics researchers.

**[Jackson and Havens, 1995]**

Author(s): W. Ken Jackson and William S. Havens.

Title: Committing to user choices in mixed initiative CSPs.

Technical Report: TR 95-01, School of Computing Science, Simon Fraser University, January 1995.

Abstract: In a mixed initiative system for solving a constraint satisfaction problem (CSP), both the user and the system make choices in trying to find a solution. These choices typically involve deciding what values to assign to variables. A desirable property in mixed initiative systems is that they commit to user choices: the user assignments should be retained as long as there is a complete solution containing those assignments. This paper shows that several backtracking algorithms that are typically used to solve constraint satisfaction problems do not commit to user choices. We also describe a slight modification of Ginsberg's dynamic backtracking algorithm that does commit to user choices.

**[Kubon, 1995]**

Author(s): Petr Kubon.

Title: Topic, focus, and some aspects of the semantics of discourse.

Technical Report: TR 95-04, School of Computing Science, Simon Fraser University, February 1995.

Abstract: This paper elaborates on the idea that negation should be represented as a tripartite structure sensitive to focus, with topic corresponding to the restrictive clause and focus to the nuclear scope. Using data on anaphora, we show that the default placement for topic is not the restrictive clause but rather the context in which the quantificational structure for negation is processed. In addition, we present some new arguments in favour of a tripartite representation of negation.

**[Popowich, 1995]**

Author(s): Fred Popowich.

Title: A chart generator for Shake and Bake machine translation.

Technical Report: TR 95-08, School of Computing Science, Simon Fraser University, July 1995.

Abstract: A generation algorithm based on an active chart parsing algorithm is introduced which can be used in conjunction with a Shake and Bake machine translation system. A concise Prolog implementation of the algorithm is provided, and some performance comparisons with a shift-reduce based algorithm are given which show the chart generator is much more efficient for generating all possible sentences from an input specification.

Jump to year: | 2011
>> 2010
>> 2009 >> 2008 >> 2007 >> 2006 >> 2005 >> 2004 >> 2003 >> 2002 >> 2001 >> 2000 >> 1999 >> 1998 >> 1997 >> 1996 >> 1995 >> 1994 >> 1993 >> 1992 >> 1991 >> 1990 >> 1989 >> 1988 >> 1987 >> 1986 >> 1985 >> 1984 >> 1983 >> 1982 >> 1981 >> 1980 |

**[Cameron, 1994]**

Author(s): Robert D. Cameron.

Title: To link or to copy?—Four principles for materials acquisition in internet electronic libraries.

Technical Report: TR 94-08, School of Computing Science, Simon Fraser University, November 1994. IN PREPARATION.

Abstract: Many of the early experiments in internet electronic libraries have used rather *ad hoc* approaches to material acquisition. Four principles are suggested as a starting point for an internet material acquisition policy. It is argued that systematic application of these principles will allow internet librarians to provide their patrons with authoritative, up-to-date, reliable and useful access to internet materials.

**[Ferreira and Peters, 1994]**

Author(s): Afonso Ferreira and Joseph G. Peters.

Title: Finding smallest paths in rectilinear polygons on a hypercube multiprocessor.

Technical Report: TR 94-04, School of Computing Science, Simon Fraser University, July 1994.

Abstract: A smallest path between two points in a polygon is a rectilinear path that simultaneously minimizes distance and the number of line segments in the path. In this paper we show how to find a smallest path between any two points in a simple rectilinear polygon with n vertices on a hypercube multiprocessor with max(n,p) processors in time O(t + log n), where p = n log n and t = O(log^2 n) are the current best bounds for solving the next element search problem.

**[Fraigniaud and Peters, 1994]**

Author(s): Pierre Fraigniaud and Joseph G. Peters.

Title: Minimum linear gossip graphs and maximal linear (Δ,k)-gossip graphs.

Technical Report: TR 94-06, School of Computing Science, Simon Fraser University, October 1994.

Abstract: *Gossiping* is an information dissemination problem in which each node of a communication network has a unique piece of information that must be transmitted to all other nodes using two-way communications between pairs of nodes along the communication links of the network. In this paper, we study gossiping using a *linear cost model* of communication which includes a start-up time and a propagation time which is proportional to the amount of information transmitted. A *minimum linear gossip graph* is a graph (modelling a network), with the minimum possible number of links, in which gossiping can be completed in minimum time under the linear cost model. For networks with an even number of nodes, we prove that the structure of minimum linear gossip graphs is independent of the relative values of the start-up and unit propagation times. We prove that this is not true when the number of nodes is odd. We present four infinite families of minimum linear gossip graphs. We also present minimum linear gossip graphs for all even numbers of nodes n ≤ 32 except n = 22. A *linear (Δ,k)-gossip graph* is a graph with maximum degree Δ in which gossiping can be completed in k *rounds* with minimum propagation time. We present three infinite families of *maximal linear (Δ,k)-gossip graphs*, that is, linear (Δ,k)-gossip graphs with a maximum number of nodes. We show that not all minimum broadcast graphs are maximal linear (Δ,k)-gossip graphs.

**[Györi et al., 1994]**

Author(s): Ervin Györi, Frank Hoffman, Klaus Kriegel, and Thomas Shermer.

Title: Generalized guarding and partitioning for rectilinear polygons.

Technical Report: TR 94-01, School of Computing Science, Simon Fraser University, January 1994.

Abstract: A T_k–guard G in a rectilinear polygon P is a tree of diameter k completely contained in P. The guard G is said to cover a point x if x is visible (or rectangularly visible) from some point contained in G. We investigate the function r(n,h,k), which is the largest number of T_k–guards necessary to cover any rectilinear polygon with h holes and n vertices. The aim of this paper is to prove new lower and upper bounds on parts of this function. In particular, we show the following bounds: r(n,0,k) ≤ ⌊n/(k+4)⌋, with equality for even k; r(n,h,1) = ⌊(3n+4h+4)/16⌋; and r(n,h,2) ≤ ⌊n/6⌋. These bounds, along with other lower bounds that we establish, suggest that the presence of holes reduces the number of guards required, if k > 1. In the course of proving the upper bounds, new results on partitioning are obtained.

**[Han et al., 1994]**

Author(s): Jiawei Han, Osmar R. Zaiane, and Yongjian Fu.

Title: Resource and knowledge discovery in global information systems: A multiple layered database approach.

Technical Report: TR 94-10, School of Computing Science, Simon Fraser University, November 1994. (also CSS/LCCR TR94-24).

Abstract: With huge amounts of information connected to the global information network (Internet), efficient and effective discovery of resource and knowledge from the "global information base" has become an imminent research issue, especially with the advent of the Information Highway. In this article, a multiple layered database (MLDB) approach is proposed to handle the resource and knowledge discovery in global information base. A multiple layered database is a database formed by generalization and transformation of the information, layer-by-layer, starting from the original information base (treated as layer-0, the primitive layer). Information retrieval, data mining, and data analysis techniques can be used to extract and transform information from a lower layer database to a higher one. Layer-1 and higher layers of an MLDB can be modeled by an extended-relational or object-oriented model, constructed automatically, and updated incrementally. Information at all the layers except the primitive one can be stored, managed and retrieved by the available database technology; resources can be found by controlled search through different layers of the database; and knowledge discovery can be performed efficiently in such a multiple layered database.

**[Heydemann et al., 1994]**

Author(s): Marie-Claude Heydemann, Joseph G. Peters, and Dominique Sotteau.

Title: Spanners of hypercube-derived networks.

Technical Report: TR 94-02, School of Computing Science, Simon Fraser University, April 1994.

Abstract: A spanning subgraph G' of a simple undirected graph G is a *t-spanner* of G if every pair of vertices that are adjacent in G are at distance at most t in G'. The parameter t is called the *dilation* of the spanner. Spanners with small dilations have many applications including their use as low-cost approximations of communication networks with only small degradations in performance. In this paper, we derive spanners with small dilations for four closely related bounded-degree approximations of hypercubes: butterflies, cube-connected cycles, binary de Bruijn graphs, and shuffle-exchange graphs. We give both direct constructions, and methods for deriving spanners for one class of graphs from spanners for another class. We prove that most of our spanners are minimum in the sense that spanners with fewer edges have larger dilations.

**[Kane and Peters, 1994]**

Author(s): Jave O. Kane and Joseph G. Peters.

Title: Line broadcasting in cycles.

Technical Report: TR 94-11, School of Computing Science, Simon Fraser University, December 1994.

Abstract: Broadcasting is the process of transmitting information from an originating node (processor) in a network to all other nodes in the network. A local broadcast scheme only allows a node to send information along single communication links to adjacent nodes, while a line broadcast scheme allows nodes to use paths of several communication links to call distant nodes. The minimum time possible for broadcasting in a network of n nodes when no node is involved in more than one communication at any given time is log n phases. Local broadcasting is not sufficient, in general, for broadcasting to be completed in minimum time; line broadcasting is always sufficient. An optimal line broadcast is a minimum-time broadcast that uses the smallest possible total number of communication links. In this paper, we give a complete characterization of optimal line broadcasting in cycles, and we develop efficient methods for constructing optimal line broadcast schemes.

**[MacDonald and Shermer, 1994]**

Author(s): Glenn MacDonald and Thomas Shermer.

Title: Isomorphism of spiral polygons.

Technical Report: TR 94-09, School of Computing Science, Simon Fraser University, November 1994.

Abstract: We call two polygons *isomorphic* if there is a one-to-one mapping between their *points* (not vertices) that preserves visibility. In this paper, we establish necessary and sufficient conditions for two spiral polygons to be isomorphic, and give an O(n²) algorithm to detect such isomorphism. As a byproduct we show that there is a canonical representation for the entire visibility structure of a spiral polygon that can be represented in O(r log r) bits, where r is the number of reflex vertices in the polygon. We also show that the continuous graph of visibility on the points of a spiral polygon is an (uncountably infinite) interval graph, and that no other polygons have this property.

**[Popowich, 1994]**

Author(s): F. Popowich.

Title: Improving the efficiency of a generation algorithm for Shake and Bake machine translation using Head-Driven Phrase Structure Grammar.

Technical Report: TR 94-07, School of Computing Science, Simon Fraser University, October 1994.

Abstract: A Shake and Bake machine translation algorithm for Head-Driven Phrase Structure Grammar is introduced based on the algorithm proposed by Whitelock for unification categorial grammar. The translation process is then analysed to determine where the potential sources of inefficiency reside, and some proposals are introduced which greatly improve the efficiency of the generation algorithm. Preliminary empirical results from tests involving a small grammar are presented, and suggestions for greater improvement to the algorithm are provided.

**[Storey et al., 1994]**

Author(s): M.-A. D. Storey, F. D. Fracchia, and S. Carpendale.

Title: A top-down approach to algorithm animation.

Technical Report: TR 94-05, School of Computing Science, Simon Fraser University, September 1994.

Abstract: Algorithm animations have proven to be successful for the presentation and exploration of algorithms. However, when viewing an animation of a complex algorithm, the amount of detail displayed can be overwhelming. In an effort to increase the level of comprehension, this paper introduces *expandable buttons* as a new interface approach for animating algorithms. Expandable buttons allow for top-down explorations of algorithms by providing interactive control over the amount of detail displayed and visually integrating the algorithm with the animation. In essence, the algorithm becomes the user interface.

**[Ueberla, 1994]**

Author(s): Joerg P. Ueberla.

Title: On using selectional restriction in language models for speech recognition.

Technical Report: TR 94-03, School of Computing Science, Simon Fraser University, July 1994.

Abstract: In this paper, we investigate the use of selectional restriction – the constraints a predicate imposes on its arguments – in a language model for speech recognition. We use an un-tagged corpus, followed by a public domain tagger and a very simple finite state machine to obtain verb-object pairs from unrestricted English text. We then measure the impact the knowledge of the verb has on the prediction of the direct object in terms of the perplexity of a cluster-based language model. The results show that even though a clustered bigram is more useful than a verb-object model, the combination of the two leads to an improvement over the clustered bigram model.
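The perplexity measure used to evaluate the language models above can be illustrated with a minimal sketch (invented probabilities, not the report's model):

```python
import math

# Minimal sketch (invented probabilities, not the report's model) of
# perplexity: 2 to the power of the average negative log2-probability a
# language model assigns to the test tokens. Lower perplexity means the
# model predicts the test data better.

def perplexity(probs):
    # probs: the model's probability for each token in the test sequence
    avg_neg_log = -sum(math.log2(p) for p in probs) / len(probs)
    return 2 ** avg_neg_log

# A model spreading probability uniformly over a 4-word vocabulary has
# perplexity exactly 4 on any test sequence:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0

# Sharper correct predictions lower perplexity:
print(perplexity([0.5, 0.5, 0.5, 0.25]))     # ≈ 2.38
```

This is why a drop in perplexity, as reported for the combined verb-object and clustered bigram model, indicates improved prediction of the next word.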

**[Ammann and Cameron, 1993]**

Author(s): Manuel M. Ammann and Robert D. Cameron.

Title: Towards program manipulation in-the-large: Inter-module renaming.

Technical Report: 93-08, School of Computing Science, Simon Fraser University, July 1993.

Abstract: We demonstrate the concept of program manipulation in-the-large with the example of inter-module renaming. A prototype tool has been developed to automate the renaming of identifiers. The renamer is particularly useful when applied locally in a large module with complicated structure or globally in a set of interdependent modules, since manual renaming can be difficult in these cases. Versions of the renamer have been implemented as stand-alone tools in the Turbo Pascal and Modula-3 environments and have been applied to moderately sized software projects. We are continuing to develop other manipulation in-the-large tools with the ultimate goal of integrating them into a high-level, manipulation-based maintenance and development environment.

**[Bose et al., 1993]**

Author(s): Prosenjit Bose, Thomas Shermer, Godfried Toussaint, and Binhai Zhu.

Title: Guarding polyhedral terrains.

Technical Report: 93-01, School of Computing Science, Simon Fraser University, January 1993.

Abstract: In this report, we prove that ⌊n/2⌋ vertex guards are always sufficient and sometimes necessary to guard the surface of an n-vertex polyhedral terrain. We also show that ⌊(4n-4)/13⌋ edge guards are sometimes necessary to guard the surface of an n-vertex polyhedral terrain. The upper bound on the number of edge guards is ⌊n/3⌋ [ER C92]. Since both upper bounds are based on the four color theorem, no practical polynomial time algorithm achieving these bounds seems to exist, but we uncover a linear time algorithm for placing ⌊3n/5⌋ vertex guards for covering a polyhedral terrain and a linear time algorithm for placing ⌊2n/5⌋ edge guards.

**[Bremner and Shermer, 1993]**

Author(s): David Bremner and Thomas Shermer.

Title: Point visibility graphs and restricted-orientation convex cover.

Technical Report: 93-07, School of Computing Science, Simon Fraser University, July 1993.

Abstract: A visibility relation can be viewed as a graph: the uncountable graph of a visibility relationship between points in a polygon P is called the point visibility graph (PVG) of P. In this paper we explore the use of perfect graphs to characterize tractable subproblems of visibility problems. Our main result is a characterization of which polygons are guaranteed to have weakly triangulated PVGs, under a generalized notion of visibility called O-visibility. Let O denote a set of line orientations. Rawlins and Wood call a set P of points O-convex if the intersection of P with any line whose orientation belongs to O is either empty or connected; they call a set of points O-concave if it is not O-convex. Two points are said to be O-visible if there is an O-convex path between them. Let O' be the set of orientations of minimal O-concave portions of the boundary of P. Our characterization of which polygons have weakly-triangulated PVGs is based on restricting the cardinality and span of O'. This characterization allows us to exhibit a class of polygons admitting an O(n^8) algorithm for O-convex cover. We also show that for any O of finite cardinality, O-convex cover and O-star cover are in NP, and have polynomial time algorithms for any fixed covering number. Our results imply previous results for the special case of O = {0, 90} of Culberson and Reckhow, and of Motwani, Raghunathan, and Saran.

**[Hepting, 1993]**

Author(s): Daryl H. Hepting.

Title: A layered approach for computerization of a multimedia archive.

Technical Report: 93-14, School of Computing Science, Simon Fraser University, December 1993.

Abstract: Merce Cunningham is widely regarded as the father of modern dance. His work, which began in the mid-1930's, is a treasure from which all the world can benefit. Much of his work, captured in several different media, is preserved in an archive maintained by the Cunningham Dance Foundation. This archive has always been a tool with which Mr. Cunningham has furthered his art. In order for it to effectively support the work of other artists, its vast collection of information must be made accessible to a wide audience, in meaningful ways. The challenge of the computerization project is to apply technology to provide that increased access, without compromising artistic integrity.

**[Kodric, 1993]**

Author(s): Sandi Kodric.

Title: Flat clause constructions in Slovene.

Technical Report: 93-12, School of Computing Science, Simon Fraser University, August 1993.

Abstract: The Slovene language exhibits a large amount of freedom in the ordering of constituents within a clause. I argue that verb phrases, which are present in many languages, are not present in Slovene. If clauses are analyzed as flat structures (without verb phrase nodes), then some discontinuities are avoided and word order can be accounted for by local precedence constraints. An HPSG model of the clause is sketched which includes a partial account of word order and referential interpretation of pronouns.

**[Liestman and Richards, 1993]**

Author(s): Arthur L. Liestman and Dana Richards.

Title: Perpetual gossiping.

Technical Report: 93-05, School of Computing Science, Simon Fraser University, June 1993.

Abstract: In this paper, we introduce a new information dissemination problem in which gossiping is to occur continuously but with restricted use of the network. In this problem, information continues to be generated by each member of the network and, thus, the gossip process must be ongoing. However, in order to allow the network to be used for other purposes, the communications used by the gossip process are limited to k calls per time unit. We present some preliminary results on this new problem.

**[Liestman and Shermer, 1993]**

Author(s): Arthur L. Liestman and Thomas C. Shermer.

Title: Degree-constrained network spanners with non-constant delay.

Technical Report: 93-06, School of Computing Science, Simon Fraser University, July 1993.

Abstract: A spanning subgraph S = (V,E') of a connected simple graph G = (V,E) is an f(x)-spanner if for any pair of nodes u and v, d_S(u,v) ≤ f(d_G(u,v)), where d_G and d_S are the usual distance functions in graphs G and S, respectively. The delay of the f(x)-spanner is f(x) - x. We find (2.5·sqrt((3x+6)/4) + 6 + x)-spanners for two-dimensional grids with maximum degree 3. We prove that the delay of these spanners is within a constant factor of optimal. We describe a (x/k + k + 8 - 7/k + x)-spanner of the X-tree with maximum degree 3. We prove that the delay of this spanner is within a constant factor of optimal. We describe a (2 + x)-spanner of the pyramid with maximum degree 6 and a (x/k + k + 8 - 7/k + x)-spanner of the pyramid with maximum degree 5. We prove that the delay of the latter spanner is within a constant factor of optimal.

**[Liestman et al., 1993]**

Author(s): Arthur L. Liestman, Thomas C. Shermer, and Christopher R. Stolte.

Title: Degree-constrained spanners for multi-dimensional grids.

Technical Report: 93-11, School of Computing Science, Simon Fraser University, August 1993.

Abstract: A spanning subgraph S = (V, E') of a connected simple graph G = (V, E) is an f(x)-spanner if for any pair of nodes u and v, d_S(u,v) ≤ f(d_G(u,v)), where d_G and d_S are the usual distance functions in graphs G and S, respectively. The delay of the f(x)-spanner is f(x) - x. We construct two spanners with maximum degree 4 for d-dimensional grids, one with delay 2d-4 and one with delay 2(⌈d/2⌉ + ⌈(d-2)/4⌉ + 2), and discuss generalizations of these spanners. We also construct a (5d + 4 + x)-spanner with maximum degree 3 for d-dimensional grids. For the particular cases of 3- and 4-dimensional grids, we improve upon this construction, giving (6+x)-spanners and (14+x)-spanners, respectively.

**[Peters and Peters, 1993]**

Author(s): David B. Peters and Joseph G. Peters.

Title: Bounded depth broadcasting.

Technical Report: 93-10, School of Computing Science, Simon Fraser University, August 1993.

Abstract: Broadcasting is an information dissemination problem in which messages originating at one site of an information network (modelled as a graph) must be transmitted to all other sites as quickly as possible. In this paper we study broadcasting in networks in which information degenerates with each transmission, so there is a limit on the number of times information can be retransmitted before it becomes unusable. We prove lower and upper bounds on the time to broadcast in this setting and on the minimum number of communication links necessary to permit minimum time broadcasting from any originator. We present several general constructions that produce infinite families of optimal networks (minimum time and minimum number of communication links). We also exhibit a number of small optimal networks that are not produced by the general constructions.

**[Peters and Syska, 1993]**

Author(s): Joseph G. Peters and Michel Syska.

Title: Circuit-switched broadcasting in torus networks.

Technical Report: 93-04, School of Computing Science, Simon Fraser University, May 1993.

Abstract: In this paper we present two broadcasting algorithms for 2-dimensional torus networks (wrap-around meshes) that use synchronous circuit-switched routing. The first algorithm is based on a recursive tiling of a torus and requires ⌈log₅(n²)⌉ phases and 2⌊n/2⌋ intermediate switch settings to broadcast in an n × n torus. Both of these quantities match the lower bounds. The second algorithm combines circuit-switching with the pipelining and arc-disjoint spanning trees techniques that are commonly used to speed up store-and-forward routing. For long messages of length L, the total transmission time is Lτ/2, where 1/τ is the bandwidth of the communication links. This is within a factor of 2 of the lower bound.

**[Popowich, 1993]**

Author(s): Fred Popowich.

Title: Lexical characterization of local dependencies with tree unification grammar.

Technical Report: 93-13, School of Computing Science, Simon Fraser University, September 1993.

Abstract: Tree Unification Grammar (TUG) is a member of the family of highly lexical unification-based grammar formalisms. Its basic grammar structures are partial descriptions of binary trees which can be ``overlaid'' but not structurally altered. TUG allows an elegant lexical characterization of linguistic phenomena often handled by grammar rules or principles in other formalisms. We examine the organization of phonological, syntactic, semantic and anaphoric information in the lexicon, and illustrate how dependencies between linguistic constituents can be characterized in terms of local constraints using an extended domain of locality. The dependencies that concern us here are those required in a principled treatment of reflexivization and controlled complements in English. We also explain how the TUG formalism differs from closely related grammar formalisms, specifically, lexicalized tree adjoining grammar, unification categorial grammar and head-driven phrase structure grammar.

**[Ueberla, 1993a]**

Author(s): Joerg P. Ueberla.

Title: The generalized NPOS language model for speech recognition.

Technical Report: 93-09, School of Computing Science, Simon Fraser University, July 1993.

Abstract: Current language models for speech recognition only contain knowledge of frequency counts of two or three consecutive words. In order to overcome this limitation, we propose a generalization of existing models with respect to two aspects. First, any information from the words seen so far can be incorporated into the model. Second, different types of information can be used for the two probabilities that are part of a commonly used model. We show that this generalization is a framework that can incorporate linguistically relevant information. Preliminary experiments also show that it has the potential to lead to language models of better quality. However, from a practical point of view, only a very small overall improvement was obtained in these preliminary experiments. We argue that this is due to a lack of knowledge as to what information will lead to a significant improvement in the quality of a language model. Future work should further investigate this matter.

**[Ueberla, 1993b]**

Author(s): Joerg P. Ueberla.

Title: State language models for speech recognition.

Technical Report: 93-03, School of Computing Science, Simon Fraser University, April 1993.

Abstract: In this report, we propose a strategy to improve language models for speech recognition. To this end, we define a class of models called state language models and show that most currently employed models are part of this class. However, we argue that these currently used models cover only a small part of all possible state language models and we argue that a more systematic study of this class should be made. A possible tool for this purpose and a rough outline of future research are also presented.

**[Ueberla, 1993c]**

Author(s): Joerg P. Ueberla.

Title: Taking a closer look at a simple bipos language model.

Technical Report: 93-02, School of Computing Science, Simon Fraser University, April 1993.

Abstract: This paper introduces a method to measure which parts of a language model perform particularly well or poorly. In our view, this information is very important and can be a valuable help in steering future research efforts. We apply the proposed method to a commonly used language model (bipos) and, based on the results, we introduce a new model for unknown words which leads to a reduction in perplexity of at least 14%.


**[Bermond and Fraigniaud, 1992]**

Author(s): Jean-Claude Bermond and Pierre Fraigniaud.

Title: Broadcasting and NP-completeness.

Technical Report: 92-01, School of Computing Science, Simon Fraser University, 1992.

Abstract: In this note, we answer two questions arising in broadcasting problems in networks. We first describe a new family of minimum broadcast graphs. Then we give a proof due to Alon of the NP-completeness of finding disjoint spanning trees of minimum depth, rooted at a given vertex.

**[Bermond et al., 1992]**

Author(s): Jean-Claude Bermond, Pierre Fraigniaud, and Joseph G. Peters.

Title: Antepenultimate broadcasting.

Technical Report: 92-03, School of Computing Science, Simon Fraser University, March 1992.

Abstract: Broadcasting is an information dissemination problem in which information originating at one node of a communication network (modelled as a graph) must be transmitted to all other nodes as quickly as possible. A broadcast graph is a graph which permits broadcasting from any originator in minimum time. In this paper, we present new methods for constructing sparse broadcast graphs. Our constructions are based on graph compounding operations which are relative to vertex sets with certain properties that depend on the broadcast protocols of the graphs. We show that many previous methods for constructing sparse broadcast graphs are special cases of our methods. We demonstrate our constructions by producing new sparse broadcast graphs and by showing how many previously constructed graphs can be obtained in a systematic way.

**[Burton, 1992]**

Author(s): F. Warren Burton.

Title: Guaranteeing good space bounds for parallel programs.

Technical Report: 92-10, School of Computing Science, Simon Fraser University, November 1992.

Abstract: The space requirements of a parallel program may be spectacularly different from the space requirements of an equivalent sequential program. For example, we will see that a program that will run in constant space sequentially may require space proportional to its run time if run in parallel with any reasonable scheduling algorithm. We will assume that each parallel program has an underlying sequential execution order that may be used as a basis for judging storage requirements. We propose a restriction that is sufficient to ensure that any program that will run in n units of space sequentially can run in mn units of space on m processors. It is always possible to transform any program into one that satisfies this restriction; however, some potential parallelism may be lost in the transformation. Alternatively, by restricting ourselves to certain language features, we can ensure that all programs meet the restrictions. For any parallel program that satisfies the restriction, there is a space-efficient scheduling strategy that will be within a factor of two of being optimal with respect to time.

**[Cameron, 1992]**

Author(s): Robert D. Cameron.

Title: Extending context-free grammars with permutation phrases.

Technical Report: 92-07, School of Computing Science, Simon Fraser University, June 1992.

Abstract: A *permutation phrase* is a grammatical phrase that specifies a syntactic construct as any permutation of a set of constituent elements. Permutation phrases allow for the concise and natural expression of free-order constructs as found in programming languages and notations such as C, COBOL, BibTeX and Unix command lines. The conciseness and clarity of expression that permutation phrase grammars offer over context-free grammars is illustrated through a case study of the declarations in C. The parsing problem for permutation phrase grammars is considered and it is shown how efficient linear-time parsing can be achieved for permutation phrase grammars satisfying an extended notion of the LL(1) property.

**[Colbourn and Harms, 1992]**

Author(s): Charles J. Colbourn and Daryl D. Harms.

Title: Evaluating performability: Most probable states and bounds.

Technical Report: 92-06, School of Computing Science, Simon Fraser University, May 1992.

**[Delgrande, 1992]**

Author(s): James P. Delgrande.

Title: Miscellaneous examples for default reasoning.

Technical Report: 92-05, School of Computing Science, Simon Fraser University, May 1992.

Abstract: This note is a compendium of examples for default reasoners. The intended interpretation of a default A => B is ``if A then normally B''; see Delgrande87 (AIJ) and Delgrande88 (AIJ) for more details. It is possible that other approaches, based on differing intuitions, may yield differing results for some of these examples. Hence a consistency-based approach such as Reiter's would not necessarily be expected to agree with all of the following, nor would a probability-based account of default reasoning. This particular approach is also more general than many others, in that arbitrary formulae of propositional logic are allowed in the antecedent and consequent of a default, as well as for the world knowledge. In addition, negated defaults are permitted, as are necessary implications.

**[Hafer, 1992]**

Author(s): Louis J. Hafer.

Title: Computational experience using arc consistency for mixed 0-1 linear programming.

Technical Report: 92-09, School of Computing Science, Simon Fraser University, August 1992.

Abstract: This report describes the implementation of a partial arc consistency algorithm in bonsai, a program for mixed 0-1 linear programming. The equivalence of the arc consistency algorithm to a stylized form of Chvátal-Gomory inequality is shown for integer variables. Computational experience on a selection of 51 problems from MIPLIB is reported; 34 subproblems are successfully solved.

**[Kodric et al., 1992]**

Author(s): Sandi Kodric, Fred Popowich, and Carl Vogel.

Title: The HPSG-PL system—version 1.2.

Technical Report: 92-08, School of Computing Science, Simon Fraser University, June 1992.

Abstract: HPSG-PL is a Prolog implementation of Head-driven Phrase Structure Grammar (HPSG). The system consists of a lexical compiler, constraint processor, chart parser and a module for linking the parser to a graphic interface. Using this system, a user can examine the properties of the HPSG formalism itself, and can investigate characteristics of specific grammars that utilize the formalism. This document details how the system implements the distinguishing aspects of HPSG. Modifications to the system are described which make this version a far more complete implementation of the grammar framework, including some of the most recent developments in the theory, than previous versions and other extant implementations of HPSG environments. In the appendix, a sample grammar which covers a fragment of English is provided.

**[Shermer and Toussaint, 1992]**

Author(s): Thomas Shermer and Godfried Toussaint.

Title: Characterizations of star-shaped polygons.

Technical Report: 92-11, School of Computing Science, Simon Fraser University, December 1992.

Abstract: An interior chord of a simple polygon P is a line segment [x,y] in P that intersects the boundary of P only at both endpoints x and y. P is said to be weakly visible from [x,y] if for every point v in P there exists a point w in [x,y] such that [v,w] lies in P. In this paper we define geodesic convexity with respect to a polygon, and then show that Helly's theorem holds for geodesic convexity. This result is then used to obtain Krasnoselskii-type characterizations of star-shaped polygons in terms of weak visibility properties of their interior chords. We also provide a new Krasnoselskii-type characterization of isothetic star-shaped polygons, and obtain a simple new proof of Krasnoselskii's original theorem.

**[Ueberla, 1992]**

Author(s): Joerg Ueberla.

Title: The Hybrid Hopfield Network for associative memory.

Technical Report: 92-04, School of Computing Science, Simon Fraser University, April 1992.

Abstract: This paper proposes a new model for associative memory, the Hybrid Hopfield Network (HHN), that uses a graph-theoretic algorithm in a connectionist framework. It is based on Jagota's recently proposed Hopfield Style Network and addresses its main weakness, the existence of spurious memories. First, a new characterization of spurious memories based on graph theory is introduced. Given this characterization, we can then detect whether spurious memories are created. Once they are detected, a mechanism to avoid them can be incorporated into the HHN. The HHN therefore ensures perfect storage and recall and can be used efficiently for very sparse patterns. Its application to the task of storing words in a dictionary is described.

**[Vogel et al., 1992]**

Author(s): Carl Vogel, Fred Popowich, and Nick Cercone.

Title: A declarative specification of path-based reasoning.

Technical Report: 92-02, School of Computing Science, Simon Fraser University, March 1992.

Abstract: Inheritance networks are often based on relationships like is-a and is-not-a among individuals and classes and are presented quite simply as directed (usually) acyclic graphs of explicit links. The definitions which specify the links that are implicit in a network are considerably more complex because the network is not tree-structured. A single node may have multiple superordinate and subordinate nodes; inheritance from each node must be accounted for, as must the possibility that negative links (which represent is-not-a relations) admit conflicting inheritance information. A plethora of systems described in the literature offer a wide variety of ways to resolve conflicting chains of explicit links in determining inheritance properties. Nonetheless, their procedural definitions for resolving conflicting path information cloud the relationship between styles of conflict resolution and corresponding definitions of inheritance. We give a declarative specification of a popular inheritance system and show how simple changes to this specification can result in different path-based reasoners. In the process, we provide a better understanding of the fundamental differences between some of the more popular path-based inheritance reasoners.


**[Andrews et al., 1991]**

Author(s): Jamie Andrews, Veronica Dahl, and Fred Popowich.

Title: A relevance logic characterization of static discontinuity grammars.

Technical Report: 91-10, School of Computing Science, Simon Fraser University, December 1991.

Abstract: A characterization of Static Discontinuity Grammars (SDGs), a logic grammar formalism due to Dahl, is given in this paper. A relevance logic sequent calculus proof system is given which is shown to be equivalent to SDGs for parsing problems, in the sense that a string of terminal symbols is accepted by a grammar if and only if the corresponding sequent is derivable in the calculus. One calculus is given for each of the two major interpretations of SDGs; the two calculi differ by only a small restriction in one rule. The conceptual connection between relevance logic and grammar formalisms in general is also explored. Several linguistic formalisms, such as categorial grammars, share the ``parsing as deduction'' approach of logic grammars; relevance logic is shown to be a very useful semantic framework for these formalisms, since its bookkeeping about the order and occurrence of assumptions can be used to preserve information about the order and occurrence of words in text.

**[Belleville and Shermer, 1991]**

Author(s): Patrice Belleville and Thomas C. Shermer.

Title: Probing polygons minimally is hard.

Technical Report: 91-12, School of Computing Science, Simon Fraser University, December 1991.

Abstract: Let Γ be a set of convex unimodal polygons in fixed position and orientation. We prove that the problem of determining whether k probes are sufficient to distinguish among the polygons in Γ is NP-complete for two types of line probes. This implies that the same results hold for most interesting classes of polygons on which line probes can be used.

**[Cai, 1991]**

Author(s): Leizhen Cai.

Title: Tree 2-spanners.

Technical Report: 91-4, School of Computing Science, Simon Fraser University, 1991.

**[Dahl, 1991]**

Author(s): Veronica Dahl.

Title: Analysis of female underrepresentation in computing sciences departments—what can be done?

Technical Report: 91-11, School of Computing Science, Simon Fraser University, December 1991.

Abstract: Only 27% of the SFU first year students in Computing Science are women. For 1990 graduates, the figure is less than 18%, and for faculty members, less than 7%. This report examines why women are far from constituting in our school (and in CS departments across the country) the roughly 50% that they constitute in the outside world and in many other disciplines.

**[Dahl et al., 1991]**

Author(s): Veronica Dahl, Fred Popowich, and Michael Rochemont.

Title: A principled characterization of dislocated phrases: Capturing barriers with static discontinuity grammars.

Technical Report: 91-9, School of Computing Science, Simon Fraser University, December 1991.

Abstract: Parsing according to the principles of modern linguistic theory is only now becoming a computationally interesting task. We contribute to these developments by illustrating how the account of movement introduced by Chomsky in Barriers can be incorporated into a Static Discontinuity Grammar (SDG). We are concerned with A'-movement as reflected in `wh' movement of arguments and adjuncts. The resulting SDG can be processed by an SDG parser to recover the thematic information and constituency structure associated with a natural language sentence.

**[Hafer, 1991]**

Author(s): Louis J. Hafer.

Title: Bonsai: Teaching a clown to prune trees.

Technical Report: 91-6, School of Computing Science, Simon Fraser University, July 1991.

Abstract: This report describes the implementation of bonsai, a program for mixed-integer linear programming. bonsai is designed to offer maximum flexibility, control, and robustness, while retaining a reasonable level of efficiency. It implements an LP-based branch-and-bound algorithm and supports binary and continuous variables. The tree exploration strategy is depth-first with best-first backtracking. Selection of the next active subproblem is based on a function which incorporates both the objective function and the integer infeasibility of a subproblem. The branching algorithm allows the specification of priorities for selecting branching variables, and the specification of groups of variables which are expanded and fixed simultaneously. A variation of the resource space tour algorithm is used to evaluate the sets of subproblems which result from branching. A partial arc consistency algorithm is incorporated to strengthen the enforcement of integrality constraints and dynamically tighten bounds on variables. Arc consistency and up/down penalties provide complementary techniques for locating binary variables which can be forced to 0 or 1. The search tree and live set data structures are dynamically allocated and pruned, so that there is no a priori limit on the size of the search tree. A multi-level error recovery strategy is used in an attempt to prevent numerical errors from halting the exploration of portions of the search tree. bonsai has been tested on problems arising from research into automated digital hardware design. These problems can be characterised as deterministic machine scheduling problems with variable-duration activities and other complicating constraints. Computational experience shows that bonsai performs significantly better than its predecessor MILP codes.

As bonsai is the successor of Bozo, this report is the successor of [7], modified to incorporate the changes required to integrate partial arc consistency techniques into the algorithm originally implemented in Bozo.

**[Liestman and Shermer, 1991a]**

Author(s): Arthur L. Liestman and Thomas C. Shermer.

Title: Additive graph spanners.

Technical Report: 91-5, School of Computing Science, Simon Fraser University, August 1991.

Abstract: A spanning subgraph S = (V,E') of a connected simple graph G = (V,E) is an f(x)-spanner if for any pair of nodes u and v, d_S(u,v) ≤ f(d_G(u,v)), where d_G and d_S are the usual distance functions in graphs G and S, respectively. We are primarily interested in (t + x)-spanners, which we refer to as additive spanners. We construct low-degree additive t-spanners for X-trees, pyramids and multidimensional grids. We prove, for arbitrary t > 0, that determining whether a given graph G has an additive t-spanner with no more than m edges is NP-complete.

**[Liestman and Shermer, 1991b]**

Author(s): Arthur L. Liestman and Thomas C. Shermer.

Title: Additive spanners for hypercubes.

Technical Report: 91-3, School of Computing Science, Simon Fraser University, June 1991.

Abstract: A t-spanner of a network is a subnetwork in which every two nodes that were connected by an edge in the original network are connected by a path of at most t edges in the subnetwork. This guarantees that if nodes u and v are at distance d(u,v) in the original network, they are at distance no more than t * d(u,v) in the t-spanner. We introduce a more general definition of spanners. A special case of this more general definition is the additive t-spanner: a subnetwork in which every two nodes u and v that were separated by distance d(u,v) in the original network are at distance no more than t + d(u,v) in the subnetwork. We construct low-degree additive t-spanners for hypercubes.

**[Liestman and Shermer, 1991c]**

Author(s): Arthur L. Liestman and Thomas C. Shermer.

Title: Grid and hypercube spanners.

Technical Report: 91-1, School of Computing Science, Simon Fraser University, 1991.

Abstract: A t-spanner of a network is a subnetwork in which every two nodes that were connected by an edge in the original network are connected by a path of at most t edges in the subnetwork. We present t-spanners with small maximum or average degree for multi-dimensional grids and hypercubes. We show how to construct small maximum degree 3-spanners for d-dimensional grids for d >= 2, small maximum degree 3-spanners for the d-dimensional hypercube, and constant average degree 3-spanners for d-dimensional grids. We also present t-spanners of minimum average degree for 2-dimensional grids.

**[Popowich and Vogel, 1991]**

Author(s): Fred Popowich and Carl Vogel.

Title: The HPSG-PL system.

Technical Report: 91-8, School of Computing Science, Simon Fraser University, October 1991. Superseded by CMPT TR 92-08.

Abstract: HPSG-PL is a Prolog implementation of Head Driven Phrase Structure Grammar. The system consists of a lexical compiler, constraint processor, chart parser and a module for linking the parser to a graphic interface. Using this system, a user can examine the properties of the HPSG formalism itself, and can investigate characteristics of specific grammars that utilize the formalism.

**[Shermer, 1991a]**

Author(s): Thomas C. Shermer.

Title: A linear algorithm for bisecting a polygon.

Technical Report: 91-7, School of Computing Science, Simon Fraser University, July 1991.

Abstract: A line is said to bisect a polygon if half of the area of the polygon is in each of the halfplanes determined by the line. This short note describes a linear-time algorithm to find a line in a given direction which bisects a simple polygon. The previous algorithm for this problem, due to Diaz and O'Rourke, takes O(n log n) time. Our algorithm proceeds by first trapezoidizing the polygon and then applying prune-and-search.

**[Shermer, 1991b]**

Author(s): Thomas C. Shermer.

Title: Several short results in the combinatorics of visibility.

Technical Report: 91-2, School of Computing Science, Simon Fraser University, May 1991.

Abstract: This paper describes several short results and musings on art galleries and visibility graphs. First, we obtain tight combinatorial bounds and weak geometric bounds on diagonal guards in simple polygons. Next, we answer some questions of O'Rourke about mobile guards in orthogonal polygons. Third, we exhibit an 11-vertex graph, and prove that it is not an induced subgraph of any point-visibility graph. Finally, we answer a question of Everett concerning the existence of bipartite forbidden induced subgraphs of visibility graphs.


**[Baker et al., 1990]**

Author(s): Susan Baker, Rob Hamm, and Fred Popowich.

Title: TreeTool user's manual.

Technical Report: 90-9, School of Computing Science, Simon Fraser University, October 1990.

**[Burton and Rayward-Smith, 1990]**

Author(s): F. Warren Burton and V. J. Rayward-Smith.

Title: Limitations of process management in distributed systems.

Technical Report: 90-4, School of Computing Science, Simon Fraser University, March 1990.

**[Delgrande, 1990]**

Author(s): James P. Delgrande.

Title: A framework for logics of explicit and implicit belief: Extended report.

Technical Report: 90-8, School of Computing Science, Simon Fraser University, November 1990. (PostScript)

Abstract: The epistemic notions of knowledge and belief have most commonly been modelled by means of possible worlds semantics. In such approaches an agent is logically omniscient, in that it knows (or believes) all logical consequences of its beliefs. Consequently, several approaches have been proposed to model systems of explicit belief. Such systems are more suited to modelling finite agents or computers, where logical omniscience does not necessarily hold. In this paper a general framework is developed for the specification of logics of explicit belief. A generalisation of possible worlds, called situations, is adopted. However the notion of an accessibility relation is not employed; instead a sentence is believed if the explicit proposition expressed by the sentence appears among a set of propositions associated with an agent at a situation. This framework is argued to be appropriate for representing propositional logics of explicit belief, as well as the traditional logics of (implicit) belief. Thus it provides a uniform and flexible basis from which various issues of explicit belief may be addressed, and from which systems may be contrasted and compared. A family of logics is developed using this framework, which extends previous approaches and addresses issues raised by these approaches. The more interesting of these logics are tractable, in that determining if a belief follows from a set of beliefs, given certain assumptions, can be accomplished in polynomial time.**[Hafer and Hutchings, 1990]**

Author(s): Lou Hafer and Ed Hutchings.

Title: . Bringing up Bozo.

Technical Report: 90-2, School of Computing Science, Simon Fraser University, August 1990.**[Mahajan and Peters, 1990]**

Author(s): Sanjeev Mahajan and Joseph G. Peters.

Title: . Regularity and locality in k-terminal graphs.

Technical Report: 90-6, School of Computing Science, Simon Fraser University, May 1990.**[Popowich, 1990]**

Author(s): Fred Popowich.

Title: . Parsing, semantic nets and tree unification grammar.

Technical Report: 90-7, School of Computing Science, Simon Fraser University, August 1990.**[Popowich and Vogel, 1990]**

Author(s): Fred Popowich and Carl Vogel.

Title: . Chart parsing head-driven phrase structure grammar.

Technical Report: 90-1, School of Computing Science, Simon Fraser University, February 1990.**[Shermer, 1990a]**

Author(s): Thomas Shermer.

Title: . On recognizing unions of two convex polygons and related problems.

Technical Report: 90-5, School of Computing Science, Simon Fraser University, April 1990.**[Shermer, 1990b]**

Author(s): Thomas Shermer.

Title: . Recent results in art galleries.

Technical Report: 90-10, School of Computing Science, Simon Fraser University, October 1990.

Abstract: Two points in a polygon are called visible if the straight line segment between them lies entirely inside the polygon. The art gallery problem for a polygon P is to find a minimum set of points G in P such that every point of P is visible from some point of G. This problem has been shown to be NP-hard by Lee and Lin [65]. However, Chvatal showed that the number of points of G will never exceed bn=3c for a simple polygon of n sides [17]. This latter result is referred to as the art gallery theorem. Many variations on the art gallery problem have been studied, and work in this area has accelerated after the publication of the monograph of O'Rourke [86], which deals exclusively with this topic. This paper provides an introduction to art gallery theorems, and surveys the recent results of the field. The emphasis is on the results rather than the techniques. In addition, this paper examines several new problems that have the same geometric flavor as art gallery problems.**[Vogel and Popowich, 1990]**

Author(s): Carl Vogel and Fred Popowich.

Title: . Head-driven phrase structure grammar as an instance of inheritance based reasoning.

Technical Report: 90-3, School of Computing Science, Simon Fraser University, February 1990.
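
The definition of visibility used in the [Shermer, 1990b] abstract above is easy to state but subtle to test exactly. As an illustration only, the sketch below approximates the test "the segment between p and q lies entirely inside the polygon" by ray-casting sample points along the segment; it is not an exact geometric algorithm (boundary cases need special treatment), and the names are ours.

```python
# Sampling-based approximation of the visibility notion from the
# abstract: p and q are visible iff segment pq lies entirely inside
# the polygon. Illustration only; an exact test must handle points
# on edges and vertices specially.

def point_in_polygon(pt, poly):
    """Even-odd ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def visible(p, q, poly, samples=200):
    """Approximate visibility: ray-cast points sampled along pq."""
    return all(
        point_in_polygon((p[0] + t * (q[0] - p[0]),
                          p[1] + t * (q[1] - p[1])), poly)
        for t in (i / samples for i in range(1, samples))
    )

# L-shaped gallery: the notch around (1, 1) blocks some lines of sight.
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(visible((0.5, 0.5), (1.5, 0.5), L))  # unobstructed sight line
print(visible((0.5, 1.9), (1.9, 0.5), L))  # blocked by the notch
```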

**[Atkins and Coady, 1989]**

Author(s): M. Stella Atkins and M. Yvonne Coady.

Title: Efficient concurrent access to shared data structures.

Technical Report: 89-6, School of Computing Science, Simon Fraser University, November 1989.

**[Chau and Liestman, 1989]**

Author(s): Siu-Cheung Chau and Arthur L. Liestman.

Title: A fault-tolerant multistage interconnection network.

Technical Report: 89-1, School of Computing Science, Simon Fraser University, January 1989.

**[Delgrande, 1989]**

Author(s): James P. Delgrande.

Title: More songs about buildings and food: Collected, extended and augmented proofs for a logic of default properties and for default reasoning I.

Technical Report: 89-8, School of Computing Science, Simon Fraser University, December 1989.

**[Liestman and Peters, 1989]**

Author(s): Arthur L. Liestman and Joseph G. Peters.

Title: Minimum broadcast digraphs.

Technical Report: 89-3, School of Computing Science, Simon Fraser University, May 1989.

**[Liestman and Richards, 1989]**

Author(s): Arthur L. Liestman and Dana Richards.

Title: Edge-labelled gossip in rounds.

Technical Report: 89-5, School of Computing Science, Simon Fraser University, July 1989.

**[McDonald and Peters, 1989]**

Author(s): Kenneth M. McDonald and Joseph G. Peters.

Title: Smallest paths in simple rectilinear polygons.

Technical Report: 89-4, School of Computing Science, Simon Fraser University, July 1989.

**[Morawetz, 1989]**

Author(s): Claudia Morawetz.

Title: A high-level approach to the animation of human secondary movement.

Technical Report: 89-2, School of Computing Science, Simon Fraser University, May 1989. (M.Sc. thesis).

**[Bang-Jensen and Hell, 1988]**

Author(s): Jorgen Bang-Jensen and Pavol Hell.

Title: The effect of two cycles on the complexity of colourings by directed graphs.

Technical Report: 88-7, School of Computing Science, Simon Fraser University, July 1988.

**[Bermond and Peyrat, 1988]**

Author(s): J-C. Bermond and C. Peyrat.

Title: Broadcasting in de Bruijn networks.

Technical Report: 88-3, School of Computing Science, Simon Fraser University, June 1988.

**[Bermond and Tzvieli, 1988]**

Author(s): J-C. Bermond and Dvora Tzvieli.

Title: Minimal diameter double-loop networks: Part II: Dense optimal families.

Technical Report: 88-6, School of Computing Science, Simon Fraser University, June 1988.

**[Bermond et al., 1988a]**

Author(s): J-C. Bermond, P. Hell, A. L. Liestman, and J. G. Peters.

Title: Broadcasting in bounded degree graphs.

Technical Report: 88-5, School of Computing Science, Simon Fraser University, June 1988.

**[Bermond et al., 1988b]**

Author(s): J-C. Bermond, P. Hell, A. L. Liestman, and J. G. Peters.

Title: New minimum broadcast graphs and sparse broadcast graphs.

Technical Report: 88-4, School of Computing Science, Simon Fraser University, June 1988.

**[Bruderlin, 1988]**

Author(s): Armin Bruderlin.

Title: Goal-directed, dynamic animation of bipedal locomotion.

Technical Report: 88-10, School of Computing Science, Simon Fraser University, December 1988.

**[Chau and Liestman, 1988a]**

Author(s): Siu-Cheung Chau and Arthur L. Liestman.

Title: A fault-tolerant binary tree architecture.

Technical Report: 88-8, School of Computing Science, Simon Fraser University, December 1988.

**[Chau and Liestman, 1988b]**

Author(s): Siu-Cheung Chau and Arthur L. Liestman.

Title: A proposal for a fault-tolerant binary hypercube architecture.

Technical Report: 88-9, School of Computing Science, Simon Fraser University, December 1988.

**[Hafer, 1988]**

Author(s): Lou Hafer.

Title: Logic synthesis by mixed-integer linear programming: Implementing the constraint system.

Technical Report: 88-2, School of Computing Science, Simon Fraser University, April 1988.

**[Tong, 1988]**

Author(s): Frank Tong.

Title: Specularity removal for shape-from-shading.

Technical Report: 88-1, School of Computing Science, Simon Fraser University, April 1988.

**[Alspach et al., 1987]**

Author(s): B. Alspach, Jean-Claude Bermond, and D. Sotteau.

Title: Decomposition into cycles I: Hamilton decompositions.

Technical Report: 87-9, School of Computing Science, Simon Fraser University, December 1987.

**[en Xie, 1987]**

Author(s): Shun en Xie.

Title: Incremental construction of 3-D models of a scene from sequentially planned views.

Technical Report: 87-1, School of Computing Science, Simon Fraser University, March 1987. (Ph.D. thesis).

**[Hall, 1987]**

Author(s): Gary Hall.

Title: Querying cyclic databases in natural language.

Technical Report: 87-4, School of Computing Science, Simon Fraser University, May 1987. (M.Sc. thesis).

**[Kao, 1987]**

Author(s): Mimi Kao.

Title: Turning null responses into quality responses.

Technical Report: 87-2, School of Computing Science, Simon Fraser University, March 1987. (M.Sc. thesis).

**[Liestman and Peters, 1987]**

Author(s): Arthur Liestman and Joseph Peters.

Title: Broadcast networks of bounded degree.

Technical Report: 87-8, School of Computing Science, Simon Fraser University, September 1987.

**[Mahajan and Peters, 1987]**

Author(s): Sanjeev Mahajan and Joseph Peters.

Title: Algorithms for regular properties in recursive graphs.

Technical Report: 87-7, School of Computing Science, Simon Fraser University, August 1987.

**[Ridsdale, 1987]**

Author(s): Gary Ridsdale.

Title: The Director's Apprentice: Animating figures in a constrained environment.

Technical Report: 87-6, School of Computing Science, Simon Fraser University, June 1987. (Ph.D. thesis).

**[Speakman, 1987]**

Author(s): Tony Speakman.

Title: Load sharing strategies for a distributed computer system.

Technical Report: 87-3, School of Computing Science, Simon Fraser University, April 1987. (M.Sc. thesis).

**[Strzalkowski, 1987]**

Author(s): Tomek Strzalkowski.

Title: A theory of stratified meaning representation for natural language.

Technical Report: 87-5, School of Computing Science, Simon Fraser University, June 1987. (Ph.D. thesis).

**[Bhattacharya and Gindy, 1986]**

Author(s): Binay Bhattacharya and Hossam A. El Gindy.

Title: Fast algorithms for the maximum empty rectangle problem.

Technical Report: 86-3, School of Computing Science, Simon Fraser University, March 1986.

**[Bhattacharya and Toussaint, 1986]**

Author(s): Binay K. Bhattacharya and G. T. Toussaint.

Title: Fast algorithms for computing the diameter of a finite planar set.

Technical Report: 86-1, School of Computing Science, Simon Fraser University, 1986.

**[Bhattacharya and Zorbas, 1986]**

Author(s): Binay Bhattacharya and John Zorbas.

Title: Solving the two-dimensional findpath problem using a line-triangle representation of the robot.

Technical Report: 86-7, School of Computing Science, Simon Fraser University, April 1986.

**[Dahl, 1986]**

Author(s): Veronica Dahl.

Title: Gramaticas discontinuas [Spanish].

Technical Report: 86-8, School of Computing Science, Simon Fraser University, 1986. (Also appears Revista Argentina de Linguistica, 1986).

**[Delgrande, 1986]**

Author(s): Jim Delgrande.

Title: Logics for natural kinds.

Technical Report: 86-6, School of Computing Science, Simon Fraser University, 1986. (Superseded by LCCR TR 86-15).

**[Erdos et al., 1986]**

Author(s): P. Erdos, Pavol Hell, and P. Winkler.

Title: Bandwidth versus bandsize.

Technical Report: 86-5, School of Computing Science, Simon Fraser University, 1986.

**[Fu and Peters, 1986]**

Author(s): Ada Fu and Joseph G. Peters.

Title: On the parallel complexity of bipartite matching.

Technical Report: 86-14, School of Computing Science, Simon Fraser University, 1986. DOES NOT EXIST.

**[Hadley, 1986]**

Author(s): Robert Hadley.

Title: Two solutions to logical omniscience: a critique with an alternative.

Technical Report: 85-21, School of Computing Science, Simon Fraser University, 1986. (Published jointly as LCCR TR 86-3).

**[Hell and Liestman, 1986]**

Author(s): Pavol Hell and Arthur L. Liestman.

Title: Broadcasting in one dimension.

Technical Report: 86-2, School of Computing Science, Simon Fraser University, 1986.

**[Hell and Nesetril, 1986]**

Author(s): Pavol Hell and Jaroslav Nesetril.

Title: On the complexity of H-coloring.

Technical Report: 86-4, School of Computing Science, Simon Fraser University, 1986.

**[Lee and Reghbati, 1986a]**

Author(s): A. Y. C. Lee and H. K. Reghbati.

Title: Effectiveness of the regional rasterization technique: A quantitative study.

Technical Report: 86-12, School of Computing Science, Simon Fraser University, 1986.

**[Lee and Reghbati, 1986b]**

Author(s): A. Y. C. Lee and H. K. Reghbati.

Title: Regional rasterization: a technique for cost-effective graphics display design.

Technical Report: 86-11, School of Computing Science, Simon Fraser University, 1986.

**[Lee and Reghbati, 1986c]**

Author(s): A. Y. C. Lee and H. K. Reghbati.

Title: Regional rasterization: Implementation aspects and case studies.

Technical Report: 86-13, School of Computing Science, Simon Fraser University, 1986.

**[Liestman and Peters, 1986]**

Author(s): Arthur L. Liestman and Joseph G. Peters.

Title: Distributed algorithms.

Technical Report: 86-10, School of Computing Science, Simon Fraser University, June 1986.

**[Merks, 1986]**

Author(s): Ed Merks.

Title: An optimal parallel algorithm for triangulating a set of points in the plane.

Technical Report: 86-9, School of Computing Science, Simon Fraser University, April 1986.

**[Abadir and Reghbati, 1985]**

Author(s): M. S. Abadir and H. K. Reghbati.

Title: Functional test generation using binary decision diagrams.

Technical Report: 85-15, School of Computing Science, Simon Fraser University, 1985.

**[Adolph et al., 1985a]**

Author(s): W. S. Adolph, H. K. Reghbati, and A. Sanmugasunderam.

Title: Application of AI techniques to VLSI analysis and testing.

Technical Report: 85-14, School of Computing Science, Simon Fraser University, 1985.

**[Adolph et al., 1985b]**

Author(s): W. S. Adolph, H. K. Reghbati, and A. Sanmugasunderam.

Title: Application of AI techniques to VLSI design.

Technical Report: 85-12, School of Computing Science, Simon Fraser University, 1985.

**[Adolph et al., 1985c]**

Author(s): W. S. Adolph, H. K. Reghbati, and A. Sanmugasunderam.

Title: A frame-based system for representing knowledge about VLSI design.

Technical Report: 85-13, School of Computing Science, Simon Fraser University, 1985.

**[Bhattacharya and Gindy, 1985]**

Author(s): Binay K. Bhattacharya and Hossam A. El Gindy.

Title: An efficient algorithm for an intersection problem with an application.

Technical Report: 85-4, School of Computing Science, Simon Fraser University, May 1985.

**[Cameron and Wu, 1985]**

Author(s): Robert D. Cameron and Joseph Wu.

Title: Pascal MPS version 2—supplementary reference manual.

Technical Report: 85-20, School of Computing Science, Simon Fraser University, September 1985.

**[Dahl, 1985]**

Author(s): Veronica Dahl.

Title: Logic-based metagrammars for natural language analysis.

Technical Report: 85-1, School of Computing Science, Simon Fraser University, 1985.

**[Hadley, 1985]**

Author(s): Robert F. Hadley.

Title: Goedel, Lucas, and mechanical models of the mind.

Technical Report: 85-2, School of Computing Science, Simon Fraser University, 1985. (Published jointly as LCCR TR 85-5).

**[Johnston and Reghbati, 1985]**

Author(s): B. Johnston and H. K. Reghbati.

Title: On an inductive inference approach to algorithmic program synthesis.

Technical Report: 85-5, School of Computing Science, Simon Fraser University, 1985. (Published jointly as LCCR TR 85-9).

**[Lee and Reghbati, 1985a]**

Author(s): A. Y. C. Lee and H. K. Reghbati.

Title: Performance of scan-line and symmetric mapping schemes for raster scan graphic systems.

Technical Report: 85-6, School of Computing Science, Simon Fraser University, 1985.

**[Lee and Reghbati, 1985b]**

Author(s): A. Y. C. Lee and H. K. Reghbati.

Title: A real-time scan-engine for shaded polygons.

Technical Report: 85-7, School of Computing Science, Simon Fraser University, 1985.

**[Leung and Reghbati, 1985a]**

Author(s): H. K. N. Leung and H. K. Reghbati.

Title: Comments on data flow anomaly detection.

Technical Report: 85-9, School of Computing Science, Simon Fraser University, 1985.

**[Leung and Reghbati, 1985b]**

Author(s): H. K. N. Leung and H. K. Reghbati.

Title: Comments on program slicing.

Technical Report: 85-10, School of Computing Science, Simon Fraser University, 1985.

**[Leung and Reghbati, 1985c]**

Author(s): H. K. N. Leung and H. K. Reghbati.

Title: Detection of infeasible paths using program instrumentation.

Technical Report: 85-18, School of Computing Science, Simon Fraser University, 1985.

**[Luk, 1985]**

Author(s): WoShun Luk.

Title: Syntax-based query optimization and its application to SQL.

Technical Report: 85-16, School of Computing Science, Simon Fraser University, 1985.

**[Luk and Kloster, 1985]**

Author(s): WoShun Luk and Steve Kloster.

Title: ELFS: English language from SQL.

Technical Report: 85-8, School of Computing Science, Simon Fraser University, 1985.

**[Osiakwan, 1985]**

Author(s): Constantine Osiakwan.

Title: Efficient implementations of the primal-dual method.

Technical Report: 85-22, School of Computing Science, Simon Fraser University, 1985.

**[Peters, 1985a]**

Author(s): Joseph G. Peters.

Title: A graduate course on algorithmic aspects of matroids and subset systems.

Technical Report: 85-17, School of Computing Science, Simon Fraser University, July 1985.

**[Peters, 1985b]**

Author(s): Joseph G. Peters.

Title: Lower bounds on time-accuracy tradeoffs for the 0-1 knapsack problem.

Technical Report: 85-19, School of Computing Science, Simon Fraser University, October 1985.

**[Popowich, 1985]**

Author(s): Fred Popowich.

Title: The SAUMER user's manual.

Technical Report: 85-3, School of Computing Science, Simon Fraser University, 1985. (Published jointly as LCCR TR 85-4).

**[Reghbati, 1985]**

Author(s): H. K. Reghbati.

Title: Fault detection in programming logic arrays.

Technical Report: 85-11, School of Computing Science, Simon Fraser University, 1985.

**[Bollobas and Hell, 1984]**

Author(s): B. Bollobas and Pavol Hell.

Title: Sorting and graphs.

Technical Report: 84-3, School of Computing Science, Simon Fraser University, 1984.

**[Cameron, 1984]**

Author(s): Robert D. Cameron.

Title: Generalized map iterators in Lisp.

Technical Report: 84-1, School of Computing Science, Simon Fraser University, January 1984.

**[Chau and Liestman, 1984a]**

Author(s): Siu-Cheung Chau and Arthur L. Liestman.

Title: Methods to construct fault-tolerant minimal broadcast networks.

Technical Report: 84-15, School of Computing Science, Simon Fraser University, 1984.

**[Chau and Liestman, 1984b]**

Author(s): Siu-Cheung Chau and Arthur L. Liestman.

Title: New methods to construct minimal broadcast networks.

Technical Report: 84-8, School of Computing Science, Simon Fraser University, 1984.

**[Cheung and Kameda, 1984]**

Author(s): David Cheung and Tiko Kameda.

Title: Site-optimal termination protocols for a distributed database system under network partitioning.

Technical Report: 84-14, School of Computing Science, Simon Fraser University, September 1984. (Published jointly as LCCR TR 84-6).

**[Dahl, 1984a]**

Author(s): Veronica Dahl.

Title: Hiding complexity from the casual writer of parsers.

Technical Report: 84-19, School of Computing Science, Simon Fraser University, 1984. (In: Natural Language Understanding and Logic Programming, North-Holland 1985).

**[Dahl, 1984b]**

Author(s): Veronica Dahl.

Title: Logic programming for constructive expert database systems.

Technical Report: 84-9, School of Computing Science, Simon Fraser University, 1984. (Reprint: 'Expert Database Systems', Larry Kerschberg (Editor), Benjamin/Cummings Publishing, pp. 209-217.).

**[Dahl, 1984c]**

Author(s): Veronica Dahl.

Title: More on gapping grammars.

Technical Report: 84-7, School of Computing Science, Simon Fraser University, 1984. (Proceedings of the International Conference on 5th Generation Computer Systems, 1984, pp 669-677).

**[Dahl and Abramson, 1984]**

Author(s): Veronica Dahl and Harvey Abramson.

Title: On gapping grammars.

Technical Report: 84-5, School of Computing Science, Simon Fraser University, 1984.

**[Dahl and McCord, 1984]**

Author(s): Veronica Dahl and Michael C. McCord.

Title: Treating coordination in logic grammars.

Technical Report: 84-6, School of Computing Science, Simon Fraser University, 1984. (Reprint: American Journal of Computational Linguistics, v9 No. 2 (Apr-Jun 1983) pp. 69-91).

**[Hadley, 1984]**

Author(s): Robert Hadley.

Title: SHADOW: A natural language query analyser.

Technical Report: 84-13, School of Computing Science, Simon Fraser University, 1984. (Published jointly as LCCR TR 84-3).

**[Hobson et al., 1984]**

Author(s): Richard Hobson, J. Gudaitis, and J. Thornburg.

Title: A new machine model for high level language interpretation.

Technical Report: 84-18, School of Computing Science, Simon Fraser University, 1984. (Published jointly as LCCR TR 84-8).

**[Katoh et al., 1984]**

Author(s): Naoki Katoh, T. Ibaraki, and Tiko Kameda.

Title: Cautious transaction schedulers with admission control.

Technical Report: 84-2, School of Computing Science, Simon Fraser University, 1984. (Published jointly as LCCR TR 84-1).

**[Luk, 1984]**

Author(s): WoShun Luk.

Title: An optimal algorithm to perform set intersections and/or unions on a broadcast network.

Technical Report: 84-12, School of Computing Science, Simon Fraser University, 1984. (Published jointly as LCCR TR 84-5).

**[Martin, 1984]**

Author(s): Fred Martin.

Title: Towards the automated evaluation of interactive systems: a quality-assessment approach.

Technical Report: 84-11, School of Computing Science, Simon Fraser University, 1984. (M.Sc. Thesis; Published jointly as LCCR TR 84-4).

**[Popowich, 1984]**

Author(s): Fred Popowich.

Title: SAUMER: Sentence analysis using meta-rules.

Technical Report: 84-10, School of Computing Science, Simon Fraser University, August 1984. (Published jointly as LCCR TR 84-2).

**[Richards and Liestman, 1984]**

Author(s): Dana Richards and Arthur L. Liestman.

Title: Generalizations of broadcasting and gossiping.

Technical Report: 84-17, School of Computing Science, Simon Fraser University, 1984.

**[Strzalkowski, 1984]**

Author(s): Tomek Strzalkowski.

Title: On the complexity of context-sensitive languages.

Technical Report: 84-4, School of Computing Science, Simon Fraser University, April 1984.

**[Abadir and Reghbati, 1983a]**

Author(s): M. S. Abadir and H. K. Reghbati.

Title: Functional specification and testing of logic circuits.

Technical Report: 83-7, School of Computing Science, Simon Fraser University, 1983.

**[Abadir and Reghbati, 1983b]**

Author(s): M. S. Abadir and H. K. Reghbati.

Title: Functional test generation for LSI circuits described by binary decision diagrams.

Technical Report: 83-5, School of Computing Science, Simon Fraser University, 1983.

**[Bhattacharya, 1983]**

Author(s): Binay Bhattacharya.

Title: An algorithm for computing order-k Voronoi diagrams in the plane.

Technical Report: 83-9, School of Computing Science, Simon Fraser University, 1983.

**[Cercone and McCalla, 1983]**

Author(s): Nick Cercone and Gordon McCalla.

Title: Artificial intelligence: Underlying assumptions and basic objectives.

Technical Report: 83-13, School of Computing Science, Simon Fraser University, December 1983.

**[Cercone et al., 1983a]**

Author(s): Nick Cercone, John Boates, and Max Krause.

Title: A semi-interactive system for finding perfect hash functions.

Technical Report: 83-4, School of Computing Science, Simon Fraser University, 1983.

**[Cercone et al., 1983b]**

Author(s): Nick Cercone, Robert Hadley, and Tomek Strzalkowski.

Title: The automated academic advisor: an introduction and initial experiences.

Technical Report: 83-11, School of Computing Science, Simon Fraser University, November 1983.

**[Dahl, 1983a]**

Author(s): Veronica Dahl.

Title: Current trends in logic grammars.

Technical Report: 83-2, School of Computing Science, Simon Fraser University, 1983.

**[Dahl, 1983b]**

Author(s): Veronica Dahl.

Title: Logic programming as a representation of knowledge.

Technical Report: 83-1, School of Computing Science, Simon Fraser University, 1983. (Reprint: IEEE Computer, October 1983, pp. 106-111).

**[Dahl, 1983c]**

Author(s): Veronica Dahl.

Title: Teoria de lenguajes [Spanish].

Technical Report: 83-6, School of Computing Science, Simon Fraser University, 1983. (Primer Simposio LatinoAmericano de Informatica).

**[Hell and Kirkpatrick, 1983]**

Author(s): Pavol Hell and D. G. Kirkpatrick.

Title: Packing by cliques and by finite families of graphs.

Technical Report: 83-8, School of Computing Science, Simon Fraser University, 1983.

**[Ibaraki and Kameda, 1983]**

Author(s): Toshihide Ibaraki and Tiko Kameda.

Title: Multiversion vs. single version serializability.

Technical Report: 83-14, School of Computing Science, Simon Fraser University, 1983. (Published jointly as LCCR TR 83-1).

**[Richards and Liestman, 1983]**

Author(s): Dana Richards and Arthur L. Liestman.

Title: Finding cycles of a given length.

Technical Report: 83-3, School of Computing Science, Simon Fraser University, 1983. (Reprint: Annals of Discrete Mathematics, 27 (1985) 249-256).

**[Strzalkowski, 1983]**

Author(s): Tomek Strzalkowski.

Title: ENGRA: Yet another parser for English.

Technical Report: 83-10, School of Computing Science, Simon Fraser University, October 1983.

**[Weinkam, 1983]**

Author(s): James J. Weinkam.

Title: METAL: a metalanguage for defining top down recognizers.

Technical Report: 83-12, School of Computing Science, Simon Fraser University, 1983.

**[Abadir and Reghbati, 1982a]**

Author(s): M. S. Abadir and H. K. Reghbati.

Title: Functional testing of semiconductor random access memories.

Technical Report: 82-10, School of Computing Science, Simon Fraser University, 1982.

**[Abadir and Reghbati, 1982b]**

Author(s): M. S. Abadir and H. K. Reghbati.

Title: Testability consideration in hardware description techniques.

Technical Report: 82-9, School of Computing Science, Simon Fraser University, 1982.

**[Bhattacharya, 1982]**

Author(s): Binay Bhattacharya.

Title: Application of computational geometry to pattern recognition problems.

Technical Report: 82-3, School of Computing Science, Simon Fraser University, 1982. (McGill Ph.D. Thesis).

**[Black and Luk, 1982]**

Author(s): Patrick Black and W. S. Luk.

Title: A new heuristic for generating semi-join programs for distributed query processing.

Technical Report: 82-4, School of Computing Science, Simon Fraser University, 1982.

**[Cameron and Ito, 1982]**

Author(s): Robert D. Cameron and M. Robert Ito.

Title: Grammar-based definition of metaprogramming systems.

Technical Report: 82-14, School of Computing Science, Simon Fraser University, October 1982. (Superseded by: ACM Transactions on Programming Languages and Systems, v6 No.1 (Jan 1984) pp. 20-54).

**[Farrag and Kameda, 1982]**

Author(s): Abdel Aziz Farrag and Tiko Kameda.

Title: On concurrency control using multiple versions.

Technical Report: 82-13, School of Computing Science, Simon Fraser University, 1982.

**[Graham and Hell, 1982]**

Author(s): R. L. Graham and Pavol Hell.

Title: On the history of the minimum spanning tree problem.

Technical Report: 82-5, School of Computing Science, Simon Fraser University, 1982. (Reprint: Annals of the History of Computing, v7 No.1 (Jan 1985) pp. 43-57).

**[Hell and Kirkpatrick, 1982]**

Author(s): Pavol Hell and D. G. Kirkpatrick.

Title: Star factors and star packings.

Technical Report: 82-6, School of Computing Science, Simon Fraser University, 1982.

**[Hobson, 1982]**

Author(s): Richard F. Hobson.

Title: A directly executable encoding for APL.

Technical Report: 82-1, School of Computing Science, Simon Fraser University, 1982.

**[Hobson and Snyder, 1982]**

Author(s): Richard F. Hobson and Warren S. Snyder.

Title: A desktop CMOS development system.

Technical Report: 82-11, School of Computing Science, Simon Fraser University, 1982. (Superseded).

**[Ibaraki and Kameda, 1982]**

Author(s): Toshihide Ibaraki and Tiko Kameda.

Title: On the optimal nesting order for computing N-relational joins.

Technical Report: 82-15, School of Computing Science, Simon Fraser University, December 1982. (Reprint: ACM Transactions on Database Systems, v9 No. 3 (Sept 1984) pp. 482-502).

**[Ibaraki et al., 1982]**

Author(s): Toshihide Ibaraki, Tiko Kameda, and T. Minoura.

Title: DITS and its applications: Serializability theory made simple.

Technical Report: 82-12, School of Computing Science, Simon Fraser University, December 1982.

**[Liestman and Richards, 1982]**

Author(s): Arthur L. Liestman and Dana Richards.

Title: Toward optimal gossiping schemes with conference calls.

Technical Report: 82-7, School of Computing Science, Simon Fraser University, 1982. (Reprint: Discrete Applied Mathematics, v7 (1984) pp. 183-189).

**[Luk, 1982]**

Author(s): WoShun Luk.

Title: Performing set intersections and/or unions in a broadcast network.

Technical Report: 82-8, School of Computing Science, Simon Fraser University, 1982.

**[Muro et al., 1982]**

Author(s): Shojiro Muro, Tiko Kameda, and T. Minoura.

Title: Multi-version concurrency control scheme for a database system.

Technical Report: 82-2, School of Computing Science, Simon Fraser University, 1982. (Reprint: Journal of Computer and System Sciences, Volume 29, number 2, pp. 207-224, October, 1984).


**[Cercone and Goebel, 1981]**

Author(s): Nick Cercone and Randy Goebel.

Title: Knowledge representation and data bases: Science or engineering.

Technical Report: 81-15, School of Computing Science, Simon Fraser University, 1981.

**[Funt, 1981]**

Author(s): Brian V. Funt.

Title: A parallel-process model of mental rotation.

Technical Report: 81-8, School of Computing Science, Simon Fraser University, 1981.

**[Haggkvist and Hell, 1981]**

Author(s): R. Haggkvist and Pavol Hell.

Title: Sorting and merging in rounds.

Technical Report: 81-9, School of Computing Science, Simon Fraser University, 1981.

**[Hedrlin et al., 1981]**

Author(s): Z. Hedrlin, Pavol Hell, and C. S. Ko.

Title: Homomorphism interpolation and approximation.

Technical Report: 81-3, School of Computing Science, Simon Fraser University, 1981.

**[Hell and Rosenfeld, 1981]**

Author(s): Pavol Hell and Moshe Rosenfeld.

Title: The complexity of finding generalized paths in tournaments.

Technical Report: 81-5, School of Computing Science, Simon Fraser University, 1981.

**[Hell and Speer, 1981]**

Author(s): Pavol Hell and E. R. Speer.

Title: Matroids with weighted bases and Feynman integrals.

Technical Report: 81-4, School of Computing Science, Simon Fraser University, 1981.

**[Hobson, 1981]**

Author(s): Richard F. Hobson.

Title: Architecture considerations for a DEL oriented arithmetic processor unit.

Technical Report: 81-14, School of Computing Science, Simon Fraser University, 1981.

**[Hobson et al., 1981]**

Author(s): Richard F. Hobson, Patrick Hannon, and Jonathan Thornburg.

Title: High-level microprogramming with APL syntax.

Technical Report: 81-2, School of Computing Science, Simon Fraser University, 1981.

**[Krause et al., 1981]**

Author(s): Max Krause, Nick Cercone, and John Boates.

Title: Perfect hash function search with application to natural language systems.

Technical Report: 81-6, School of Computing Science, Simon Fraser University, 1981.

**[Luk, 1981a]**

Author(s): W. S. Luk.

Title: On estimating block accesses in database organizations.

Technical Report: 81-10, School of Computing Science, Simon Fraser University, 1981.

**[Luk, 1981b]**

Author(s): W. S. Luk.

Title: On methodologies of computer simulations to evaluate ridge estimators.

Technical Report: 81-1, School of Computing Science, Simon Fraser University, 1981.

**[Sterling, 1981]**

Author(s): Theodor D. Sterling.

Title: Computerized management systems and democracy.

Technical Report: 81-11, School of Computing Science, Simon Fraser University, May 1981.

**[Strothotte, 1981]**

Author(s): Thomas Strothotte.

Title: Fast raster graphics using parallel processing.

Technical Report: 81-12, School of Computing Science, Simon Fraser University, June 1981. (M.Sc. Thesis).


**[Hobson, 1980]**

Author(s): Richard F. Hobson.

Title: Software sympathetic chip set design.

Technical Report: 80-3, School of Computing Science, Simon Fraser University, 1980.

**[Luk and Luk, 1980]**

Author(s): W. S. Luk and Lydia Luk.

Title: Optimizing query processing strategies in a distributed database system.

Technical Report: 80-4, School of Computing Science, Simon Fraser University, 1980.
