Geometry.Net - the online learning center
Home  - Computer - Parallel Computing
Page 6: entries 101-120 of 134

         Parallel Computing:     more books (100)
  1. Optimizing Supercompilers for Supercomputers (Research Monographs in Parallel and Distributed Computing) by Michael Wolfe, 1989-03-20
  2. Parallel Computing Using the Prefix Problem by S. Lakshmivarahan, Sudarshan K. Dhall, 1994-07-21
  3. Languages and Compilers for Parallel Computing: 18th International Workshop, LCPC 2005, Hawthorne, NY, USA, October 2005 by Eduard Ayguadé, Gerald Baumgartner, J. Ramanujam, et al., 2005
  4. Languages and Compilers for Parallel Computing: 6th International Workshop, Portland, Oregon, USA, August 12 - 14, 1993. Proceedings (Lecture Notes in Computer Science)
  5. Parallel Computing on Distributed Memory Multiprocessors (NATO ASI Series / Computer and Systems Sciences)
  6. Implementation of Non-Strict Functional Programming Languages (Research Monographs in Parallel and Distributed Computing) by Kenneth R. Traub, 1991-03-07
  7. Applied Parallel Computing. Industrial Computation and Optimization: Third International Workshop, PARA '96, Lyngby, Denmark, August 18-21, 1996, Proceedings (Lecture Notes in Computer Science)
  8. Distributed and Parallel Computing: 6th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP, Melbourne, Australia, ... Computer Science and General Issues)
  9. Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (Studies in Computational Intelligence)
  10. Parallel Computing Using Optical Interconnections (The Springer International Series in Engineering and Computer Science)
  11. Advances in Randomized Parallel Computing (Combinatorial Optimization)
  12. Advances in Computing Techniques: Algorithms, Databases and Parallel Processing : Jsps-Nus Seminar on Computing, National University of Singapore, 5-7 December 1994
  13. Parallel Computing: Methods, Algorithms and Applications
  14. Practical Parallel Computing: Status and Prospects (Concurrency, Practice and Experience, Vol 3, Issue 6) by Paul Messina, 1992-02

101. Pooch Application
Graphical software for distributing jobs to a network of Macintoshes.
http://daugerresearch.com/pooch/

102. MathWorks United Kingdom - Parallel Computing - MATLAB & Simulink Solutions
MathWorks parallel computing products, together with MATLAB and Simulink, enable you to perform large-scale simulations and data processing tasks using multicore desktops, clusters, and other high-performance resources.
http://www.mathworks.co.uk/parallel-computing/
Parallel Computing
Max Planck Institute of Biochemistry: "Parallel Computing Toolbox enabled us to speed up our processing by 20 to 30 times. We were able to use our cluster productively from the MATLAB environment without having to be experts in parallel programming or having to learn another programming language."
Perform large-scale computations using multicore desktops, GPUs, clusters, grids, and clouds
Large-scale simulations and data processing tasks that support engineering and scientific activities, such as mathematical modeling, algorithm development, and testing, can take an unreasonably long time to complete or require a lot of computer memory. You can speed up these tasks by taking advantage of high-performance computing resources such as multicore computers, GPUs, computer clusters, and grid and cloud computing services. MathWorks parallel computing products let you use these resources from MATLAB and Simulink without making major changes to your computing environment and workflows.
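To make the idea concrete outside of MATLAB, here is a minimal Python sketch (not MathWorks code) of the same pattern: farming independent runs of an expensive task out to every core of a multicore desktop. The simulate function is a hypothetical stand-in for a real simulation step.

    # Minimal multicore sketch (illustrative, not MathWorks code):
    # run many independent "simulations" across all cores of a desktop.
    from multiprocessing import Pool

    def simulate(param):
        # hypothetical stand-in for an expensive simulation step
        return sum(i * param for i in range(100_000))

    if __name__ == "__main__":
        with Pool() as pool:                         # one worker per core by default
            results = pool.map(simulate, range(32))  # parameter sweep in parallel
        print(len(results), "runs completed")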

103. Pypvm Home Page
PVM bindings for Python. Source code.
http://pypvm.sourceforge.net/
pypvm home page
Overview
pypvm is a Python module which allows interaction with the Parallel Virtual Machine (PVM) package. PVM allows a collection of computers connected by a network to serve as a single parallel computer. More information about the PVM package can be found at http://www.epm.ornl.gov/pvm . Pypvm is intended to be an educational and prototyping tool. The primary authors of Pypvm are W. Michael Petullo (pypvm@flyn.org), Greg Baker (gregb@ifost.org.au), and Sam Tannous (stannous@cisco.com). There is just one mailing list at the moment, which is hardly ever used: pypvm-discuss@lists.sourceforge.net
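To give a flavor of PVM-style message passing from Python, here is a hypothetical master-side sketch. The call names below (mytid, spawn, recv, upkstr, exit) are an assumption that pypvm mirrors the C libpvm3 interface, as bindings of this era usually did; consult pypvm's README for the actual API.

    # Hypothetical sketch only: assumes pypvm mirrors libpvm3 call names
    # (mytid, spawn, recv, upkstr, exit); the real names may differ.
    import pypvm

    tid = pypvm.mytid()                # enroll this process in the virtual machine
    tids = pypvm.spawn("worker.py", 4) # assumed: start 4 worker tasks
    for _ in range(len(tids)):
        pypvm.recv(-1, -1)             # block for a message from any task, any tag
        print(pypvm.upkstr())          # unpack a string result from the buffer
    pypvm.exit()                       # detach from PVM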
Download
It is available via HTTP from pypvm.sourceforge.net/pypvm-0.94.tar.gz and prdownloads.sourceforge.net/pypvm/pypvm-0.94.tar.gz . The usual story applies: gunzip and tar xvf the file, or use whatever tools read .tar.gz files on your operating system. Then read the INSTALL and README files, which should be sufficiently informative. It has only been tested on Linux/x86 and Linux/PPC, although there is no reason to expect it won't work elsewhere.

104. CS 838 - Topics In Parallel Computing - Spring 1999
There are a couple of books on parallel algorithms and parallel computing you might find useful as a supplementary source of information.
http://pages.cs.wisc.edu/~tvrdik/cs838.html
CS 838: Topics in Parallel Computing
Spring 1999
UNIVERSITY OF WISCONSIN-MADISON
Computer Sciences Department
Administrative details
Instructor: Pavel Tvrdik email: tvrdik@cs.wisc.edu Office: CS 6376 Phone: Office hours: Tuesday/Thursday 9:30-11:00 a.m. or by appointment Lecture times: Tuesday/Thursday 08:00-09:15 a.m. Classroom: 1221 Computer Sciences
The contents
  • Syllabus
  • Schedule and materials
  • Optional books
  • Course requirements ...
  • Grading policy
    The syllabus of the lectures
    The aim of the course is to introduce you to the art of designing and analyzing efficient parallel algorithms for both shared-memory and distributed-memory machines. It is structured into four major parts.
    The first part of the course is a theoretical introduction to the field of design and analysis of parallel algorithms. We will explain the metrics for measuring the quality and performance of parallel algorithms, with emphasis on scalability and isoefficiency. To prepare the framework for parallel complexity theory, we will introduce a fundamental model, the PRAM model. Then we will introduce the basics of parallel complexity theory to provide a formal framework for explaining why some problems are easier to parallelize than others. More specifically, we will study NC-algorithms and P-completeness.
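    As a concrete illustration of the PRAM cost model (a sketch for this write-up, not course material), the following Python snippet serially simulates an EREW PRAM summing n values by tree reduction in O(log n) parallel steps:

        # Serial simulation of an EREW PRAM tree reduction: with n processors,
        # the sum of n values takes O(log n) parallel steps.
        def pram_sum(values):
            a = list(values)
            n, step = len(a), 1
            while step < n:
                # All additions in this inner loop would execute concurrently,
                # one per processor, in a single PRAM step.
                for i in range(0, n - step, 2 * step):
                    a[i] += a[i + step]
                step *= 2
            return a[0]

        print(pram_sum(range(16)))  # 120, reached in log2(16) = 4 parallel steps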
    The second part of the course will deal with communication issues of distributed memory machines. Processors in a distributed memory machine need to communicate to overcome the fact that there is no global common shared storage and that all the information is scattered among processors' local memories. First, we survey interconnection topologies and communication technologies, their structural and computational properties, and embeddings and simulations among them. All this will form a framework for studying interprocessor communication algorithms, both point-to-point and collective communication operations. We will concentrate mainly on orthogonal topologies, such as hypercubes, meshes, and tori, and will study basic routing algorithms, permutation routing, and one-to-all as well as all-to-all collective communication algorithms. We conclude with some more realistic abstract models for distributed memory parallel computations.
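    For example, one-to-all broadcast on a hypercube, one of the collective operations studied here, completes in log2(p) rounds by forwarding the message across one address bit per round. An illustrative Python sketch (not from the course materials):

        # One-to-all broadcast on a d-dimensional hypercube: in round k, every
        # node that already holds the message forwards it to the neighbor whose
        # id differs in bit k, so all 2**d nodes are reached in d rounds.
        def hypercube_broadcast(d, source=0):
            have = {source}
            for k in range(d):
                for node in list(have):
                    have.add(node ^ (1 << k))  # neighbor across dimension k
            return have

        assert hypercube_broadcast(4) == set(range(16))  # 16 nodes in 4 rounds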
105. Parallel Computing Tutorial
    A parallel computing tutorial on the concepts and practice of parallel programming and supercomputing. Taught at corporations, conferences, and research centers.
    http://www.eecs.umich.edu/~qstout/tut/
    Parallel Computing 101
    Quentin F. Stout
    Christiane Jablonowski

    Parallel computers are easy to build; it's the software that takes work. See us at Supercomputing SC10 in New Orleans.
    Our tutorial is on Sunday, November 14.
    Abstract
    This tutorial provides a comprehensive overview of parallel computing, emphasizing those aspects most relevant to the user. It is suitable for new or prospective users, managers, students, and anyone seeking a general overview of parallel computing. It discusses software and hardware, with an emphasis on standards, portability, and systems that are commercially or freely available. Systems examined include clusters and tightly integrated supercomputers. The tutorial provides training in parallel computing concepts and terminology, and uses examples selected from large-scale engineering, scientific, and data-intensive applications. These real-world examples are targeted at distributed memory systems using MPI, shared memory systems using OpenMP, and hybrid systems that combine the MPI and OpenMP programming paradigms. The tutorial shows basic parallelization approaches and discusses some of the software engineering aspects of the parallelization process, including the use of state-of-the-art tools. The tools introduced range from parallel debugging tools to performance analysis and tuning packages. We use large-scale projects as examples to help the attendees understand the issues involved in developing efficient parallel programs. Examples include: crash simulation (a complex distributed memory application parallelized for Ford Motor); climate modeling (an application highlighting distributed, shared, and vector computing ideas with examples from NCAR, NASA and ECMWF); space weather prediction (an adaptive mesh code scaling to well over 1000 processors, funded by NASA/DoD/NSF); and design of ethical clinical trials (a memory-intensive application funded by NSF). Attendees find the lessons convincing, and occasionally humorous, because we discuss mistakes as well as successes.
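    In the spirit of the tutorial's distributed-memory examples (a sketch for this page, not taken from the tutorial itself), here is a tiny MPI program using the mpi4py bindings: each rank sums a strided share of the data and rank 0 combines the partial results.

        # Minimal MPI data-parallel sum with mpi4py (illustrative only).
        # Run with e.g.: mpiexec -n 4 python sum.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        partial = sum(range(rank, 100, size))             # this rank's share of 0..99
        total = comm.reduce(partial, op=MPI.SUM, root=0)  # combine on rank 0
        if rank == 0:
            print("total =", total)                       # 4950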

    106. PVM++: A C++-Library For PVM
    A C++ library for PVM.
    http://pvm-plus-plus.sourceforge.net
    PVM++: A C++-Library for PVM
    This library provides an easy way to program with the widely used parallel programming library PVM, which works in homogeneous and heterogeneous network environments.
    Features
    • Easy sending and receiving of messages in heterogeneous networks.
    • Full STL integration.
    • Easy installation with a configure script on all UN*X platforms.
    • Easy access to all task and host information.
    • Message handlers are supported.
    • Messages can be automatically unpacked on arrival.
    • Released under the LGPL (GNU Lesser General Public License).
    Download
    Download the latest version, pvm++-0.6.0.tar.gz (201 KiB). See below for older versions and the differences between them.
    News
    • Now only hosted at SourceForge. I shut down the German mirror because I'm going to leave the hosting university this year.
    • A reference in PDF format has been added.
    • SourceForge has discontinued FTP support. From now on, only HTTP downloads are possible. If that is a problem for you, please e-mail me.

    107. Pragmatic Parallel Computing
    File format: PDF (Adobe Acrobat).
    http://www.stat.washington.edu/hana/parallel/snowFT-doc.pdf

    108. The Aggregate: FNN, Flat Neighborhood Networks
    A network topology designed to minimize latency in clusters of PCs.
    http://aggregate.org/FNN/
    FNN: Flat Neighborhood Networks
    Welcome to the home of FNN documents and software! Since the press releases on our clusters, which have Flat Neighborhood Networks, many of you have been asking us for more information, access to the GA (genetic search algorithm) we developed to design FNNs, etc. This is where everything will be posted.
    What is an FNN?
    An FNN is a network that guarantees single-switch latency and full link bandwidth per PE (Processing Element) pair on a wide variety of parallel communication patterns. It does so using a flat topology of PEs connected to switches, where each switch forms a neighborhood of tightly coupled PEs. The switches are nominally connected only to PEs and not to each other. The key is that each PE in an FNN is connected to several switches, and is thus a member of several neighborhoods. The overall effect is that PE pairs can communicate with single-switch latency and full link bandwidth across the entire machine, even though each individual neighborhood does not encompass the entire machine. There are two classes of FNNs, based on which pairs of PEs have guaranteed latency and bandwidth (a sketch of checking this property follows the list):
    • Universal FNNs guarantee the latency and bandwidth for all PE pairs.
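    To make the universal-FNN property concrete, here is an illustrative Python check (ours, not from aggregate.org): given each PE's set of switch connections, verify that every PE pair shares at least one switch and therefore gets single-switch latency.

        # Check the universal-FNN property: every pair of PEs must share
        # at least one switch. The example layout below is hypothetical.
        from itertools import combinations

        def is_universal_fnn(pe_switches):
            """pe_switches maps PE id -> set of switch ids that PE connects to."""
            return all(pe_switches[a] & pe_switches[b]
                       for a, b in combinations(pe_switches, 2))

        example = {0: {0, 1}, 1: {0, 2}, 2: {1, 2}, 3: {0, 1}}  # 4 PEs, 2 NICs each
        print(is_universal_fnn(example))  # True: single-switch latency for all pairs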

    109. The Beowulf Archives
    Mailing list for Beowulf discussion.
    http://www.beowulf.org/pipermail/beowulf/
    - Beowulf
    - Beowulf Announce
    - Scyld-users
    - Beowulf on Debian
    The Beowulf Archives
    Many of your questions may have already been answered in earlier discussions or in the FAQ. The search results page will indicate current discussions as well as past listservs, articles, and papers. Monthly archives are available in thread, subject, author, date, and downloadable text views; the listing here runs from October 2010 back through May 2009 and earlier.

    110. Slashdot | Feature:Beowulf, Beyond The Hype
    Introduction to Beowulfs.
    http://slashdot.org/features/older/9807060858215.shtml
    Feature: Beowulf, Beyond the Hype. Posted by CmdrTaco on Monday July 06, @03:58AM
    from the stuff-to-read dept.

    Michael Eilers has written a sort of introduction to Beowulf: what it does, what it doesn't do, and why we should care. It really is a sort of quickie distributed computing FAQ that many of you might enjoy. So hit the link below and find out. The following is a feature by Slashdot reader Michael Eilers.
    Beowulf beyond the Hype
    A Quickstart to the Beowulf Concept
    During the last weeks the Beowulf project got a lot of attention in the PC press and even on Slashdot. With Red Hat's Extreme Linux CD, the relevant mailing lists show an increasing number of newbie questions. Unfortunately, the information Red Hat provides on their Extreme Linux web pages is less than informative and full of hype. This may result in disappointed users. It seems appropriate to make some comments on hardware and software and give some guidance for the very beginner. The name Beowulf stems from an old English tale and was the name of the first example of this class of computers. In fact, a Beowulf is nothing other than a local computer network. You might say: I have a small network in my flat (e.g. an old 486 connected to my newer machine); do I have a Beowulf? The answer is yes. You already own the hardware to start. Even if your connection is via PLIP/SLIP you can call your construction a Beowulf as soon as

    111. IEEE Task Force On Cluster Computing
    Links to cluster management systems, environments, software, documents, conferences, and teaching slides.
    http://www.ieeetfcc.org/

    112. Parallel Computing Category
    Discuss and ask questions about parallel computing in C++ and native code, including the Parallel Pattern Library (PPL) and the Asynchronous Agents Library.
    http://social.msdn.microsoft.com/Forums/en/category/parallelcomputing

    113. Grid Computing Info Centre
    An initiative to establish a global grid of computing power. Links to conferences, development, and related information.
    http://www.gridcomputing.com/
    Grid Computing Info Centre (GRID Infoware)
    The Grid Computing Information Centre (GRID Infoware: http://www.gridcomputing.com ) aims to promote the development and advancement of technologies that provide seamless and scalable access to wide-area distributed resources. Computational Grids enable the sharing, selection, and aggregation of a wide variety of geographically distributed computational resources (such as supercomputers, compute clusters, storage systems, data sources, instruments, and people) and present them as a single, unified resource for solving large-scale compute- and data-intensive applications (e.g., molecular modelling for drug design, brain activity analysis, and high-energy physics). This idea is analogous to the electric power network (grid), where power generators are distributed but users are able to access electric power without bothering about the source of that energy or its location. What is the Grid? How is it different from clusters or P2P? Please check out the Grid FAQ page before asking me.

    114. ICPC (Imperial College Parallel Computing Centre)
    Provides College members with dedicated access to powerful parallel computers. News and details of the facilities.
    http://www-icpc.doc.ic.ac.uk/
    Important
    Please note: most of our activities are now carried out under the aegis of the London e-Science Centre. More up-to-date details of our facilities can be found on that site.
    The account application procedure below still works; other than that, this website is not being actively maintained.
    180 Queens Gate, London SW7 2BZ
    Email: icpc@doc.ic.ac.uk

    115. Homepage: Cluster & Grid Computing
    Programming projects, calls for papers, software, documentation, other resources. Chemnitz University of Technology, Germany.
    http://www.tu-chemnitz.de/informatik/RA/cchp/

    116. The History Of The Development Of Parallel Computing
    A history of the development of parallel computing, compiled by Gregory V. Wilson.
    http://ei.cs.vt.edu/~history/Parallel.html
    The History of the Development of Parallel Computing
    Gregory V. Wilson (gvw@cs.toronto.edu)
    "From the crooked timber of humanity / No straight thing was ever made"
    [1] IBM introduces the 704. Principal architect is Gene Amdahl; it is the first commercial machine with floating-point hardware, and is capable of approximately 5 kFLOPS.
    [2] IBM starts the 7030 project (known as STRETCH) to produce a supercomputer for Los Alamos National Laboratory (LANL). Its goal is to produce a machine with 100 times the performance of any available at the time.
    [3] The LARC (Livermore Automatic Research Computer) project begins, to design a supercomputer for Lawrence Livermore National Laboratory (LLNL).
    [4] The Atlas project begins in the U.K. as a joint venture between the University of Manchester and Ferranti Ltd. Principal architect is Tom Kilburn.
    [5] Digital Equipment Corporation (DEC) is founded.
    [6] Control Data Corporation (CDC) is founded.

    117. International Journal Of High Speed Computing (IJHSC)
    International Journal of High Speed Computing. Sample copy available, archives accessible to subscribers.
    http://www.worldscinet.com/ijhsc/ijhsc.shtml
    Print ISSN: 0129-0533
    World Scientific has suspended publication of IJHSC; all volumes (1989-2004) remain available, and articles from back issues can be purchased through its Pay-Per-View service.
    Current issue: Volume 12, Issue 1 (June 2004). Contents:
    - Time-Parallel Computation of Pseudo-Adjoints for a Leapfrog Scheme, by Christian H. Bischof and Po-Ting Wu, pages 1-27.
    - A New Fault-Tolerant Routing Algorithm for k-ary n-cube Networks, by J. Al-Sadi, K. Day and M. Ould-Khaoua, pages 29-54.
    - Comparisons of the Parallel Preconditioners for Large Nonsymmetric Sparse Linear Systems on a Parallel Computer, by Sangback Ma, pages 55-68.
    - The Communication Machine, by Paul N. Swarztrauber.

    118. WANG'S BOOKSHELF (Parallel Computing)
    Jonathan Wang's bookshelf of parallel computing resources, hosted at IRIS.umcs.maine.edu.
    http://www.umcs.maine.edu/~shamis/wang.html
    Welcome to Jonathan Wang's Bookshelf on Parallel Computing
    WELCOME: IRIS.umcs.maine.edu // Local time: Fri Feb 3 14:16:36 EST 1995
    Why parallel computing? (Introduction)
    How fast can it be? (The Top 500 supercomputers, in PostScript, 10 MB) Contents:
  • Distributed Batch Processing
  • Parallel Computing
  • Message Passing
  • Shared Object
  • Misc (yet to be sorted)
  • Super Computers/labs
  • Compiler/Parallelizer
  • Benchmarks
  • Fault Tolerance and Load Balance
  • Parallel Software
  • Comprehensive Links at Other Institutes
  • Personal Stuff
  • Mail Drop
    1. Distributed Batch Processing
119. The Grid: The Next-Gen Internet?
    Discussion of the Grid.
    http://www.wired.com/science/discoveries/news/2001/03/42230
      The Grid: The Next-Gen Internet?
      By Douglas Heingartner, Amsterdam, Netherlands. The Matrix may be the future of virtual reality, but researchers say the Grid is the future of collaborative problem-solving. More than 400 scientists gathered at the Global Grid Forum this week to discuss what may be the Internet's next evolutionary step. Though distributed computing evokes associations with populist initiatives like SETI@home, where individuals donate their spare computing power to worthy projects, the Grid will link PCs to each other and the scientific community like never before. The Grid will not only enable sharing of documents and MP3 files, but also connect PCs with sensors, telescopes and tidal-wave simulators. IBM's Brian Carpenter suggested that "computing will become a utility just like any other utility."

    120. Parallel Computing Windows Style
    File format: PDF (Adobe Acrobat).
    http://pheattarchive.emporia.edu/projects/GRID/art3.pdf

