|
This newsletter covers Asa Trainer's presentation on data exchange, given yesterday. Unfortunately the writing and editorial staff have to sleep sometime.
And I'd also like to correct a detail of fact. Leafing through the presenter biographies today, I noticed that Jim Heppelmann wasn't actually a Computervision employee. He was an employee of Windchill Technologies (which he helped co-found in 1996), and CV had an interest in that company. When PTC acquired CV in 1998, PTC got that interest too, and the rest is history. Worth clearing up this detail, because it explains why PTC didn't know of Windchill when PTC made an offer for CV in Dec. 1997: Jim and the others at Windchill Technologies weren't on the CV payroll, so they weren't very visible.
Contents
- Data Exchange and Archiving
- Customize for Business Success
- Concurrent Design at Motorola
- Top-Down Design: Evolving Process
- Great Top-Down Designers of the Past
- Data Exchange and Archiving
Asa Trainer remarked that this was the sixth consecutive year he had presented this same subject for PTC. And he gave his usual thorough and systematic talk, covering a wide variety of interfaces and methods. He had a good slide to show the importance of data exchange: a transmission with parts from 6 different sources, in 6 different formats (CATIA, UG, IGES, STEP, and so forth, a common hodgepodge).
At one point years ago, PTC sales reps often said the solution for data exchange was for everyone to use Pro/E. PTC has come a long way since then: in the Nov. 2001 STEP benchmarks (organized by ProSTEP, the STEP standards group), Pro/E scored highest on both import and export, exchanging with 8 different other packages. Wildfire 2.0 will support STEP AP214, which is popular in Europe. Wildfire 3.0 will include GD&T support in STEP, building on Wildfire 2.0's support for annotated features.
2D import/export now uses wizards, as an alternative to specifying all those options in your config.pro. The Import Data Doctor for 3D is continually being developed: you can work on subsets of the data (usually a better approach than trying to do everything at once), freeze certain surfaces you don't want to change in later imports, or split surfaces to solve u/v parameter problems.
Parasolid import and export (hidden in Wildfire) are improved. The CAT II interface (CATIA V4) lets you set the CATIA model size and accuracy when you export, to avoid problems in CATIA with Pro/E's use of relative accuracy. Believe it or not, the default model size in CATIA is 10 meters (they must have been dreaming of airplanes), but most CATIA users set it to more like 1 or 2 meters. Wildfire 2.0 will include a new translator (a separate package, see your sales rep.) for CATIA V5, which doesn't even need a CATIA license. The geometry is better quality, but CATIA gets most of the credit, just because it has a better geometry engine in V5.
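As an aside, here's a back-of-envelope sketch of why relative accuracy causes this trouble. This is my illustration, not from the talk; the 0.0012 default and the use of overall part size as the scaling measure are assumptions:
```python
# Hedged sketch: Pro/E's relative accuracy means the absolute tolerance
# scales with part size, while CATIA V4 works in a fixed model-space size.
# The 0.0012 default and "part size" measure are assumptions for illustration.
def proe_absolute_tolerance(part_size_mm: float,
                            relative_accuracy: float = 0.0012) -> float:
    """Absolute tolerance implied by Pro/E's relative accuracy."""
    return part_size_mm * relative_accuracy

for size_mm in (100.0, 1000.0):
    print(f"{size_mm:6.0f} mm part -> ~{proe_absolute_tolerance(size_mm):.3f} mm tolerance")
# A 10 m CATIA model space spreads its fixed precision over a much larger
# volume than a 1-2 m space, hence the advice to shrink it before export.
```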
IGES still gets attention: you can now export cross-hatching as an entity. You can also import DXF blocks as drawing symbols, which is often the most appropriate Pro/E equivalent to the use of blocks in AutoCAD.
Faceted models are now supported for design and context, including mass properties, clearance, cross sections, and datums. The formats are STL, VRML, faceted STEP, and CATIA SOLM, with DXF and ProductView coming in Wildfire 2.0.
For UG, there is a Granite gateway, which reads up to V18, writes V18, and uses ATB to update changes. IDEAS is import only, reading up to V9, with no plans to export to IDEAS, since EDS isn't going to keep IDEAS. UG NX is in-house at PTC now, and may be supported in Wildfire 3.0.
Writing out to previous versions of Pro/E (Cross Release Interoperability, or CRI for short) works via an ATB neutral file. But a separate file is hard to track, and it generates a single import feature. Wildfire 2.0 will let you identify features in the feature tree, and trace from the feature tree to the model; ATB will include updates. Later improvements will include Views and Explode States. In Wildfire 3.0 we may get a different approach, using Granite. Granite has an "understanding" of the model, but it doesn't have the "recipe" to modify it.
AutobuildZ is an entirely free tool, available at the ptc.com download page, to generate 3D geometry from typical ortho/section/detail views. Extrude/Revolve/Hole/Datum are supported, and it validates the profile automatically. It will be built into Wildfire 2.0.
The name Pro/Batch is gone, replaced by Distributed Services. But it does the same job: all the data interfaces, plus ModelCHECK and printing/plotting.
IGES was originally developed by several companies together (Boeing and GE, for example) as a way to archive CAD data securely, in a public neutral format, so it could be read anytime, regardless of versions and vendors and hardware. Then it came to be used for data exchange. So now, in reverse, archiving is looking to STEP, which was originally developed for data exchange but is now useful for archiving, for the same reason IGES was useful: the public neutral format.
Archiving here would include industries like steam generator turbines, where a mfr. might need exact information 20 or 30 years from now, when Pro/E may be just a memory. Of course, a fully dimensioned drawing is still the very best archive for mech. design (preferably a physical copy on microfilm, still the best archive medium). But you may not be making those drawings any more, or you may want to archive the 3D data in addition to the drawings.
- Customize for Business Success
Paul Crane is in a central engineering position at John Deere in Moline, where he does technology assessments of PTC software across the company. He sees a wide variety of business groups, since making a tractor is not at all like making a combine. And he's looking for opportunities to bridge the gap between what a tool like Pro/E can do and what a common process requires, with a custom program. But it's important that a custom program be well justified and well used.
The most important general observation Paul had to make was probably this: automating an inefficient process is pointless. That's a common observation, but it still happens, over and over again.
Paul had 3 examples of custom programs:
- updating Pro/E files to Deere company standards. Part of the problem of maintaining company standards is that it is tedious work; no one really wants to do it. So a program fits. The Deere program designates parameters as needed, moves items to layers, orients views, checks relations, renames and reorders datums (see the sketch after this list for the flavor of such a check). It got its biggest single use when a group was moving to Intralink, but so far 607 users have saved 110,000 hours with this program. Doesn't do so much, but is used a lot.
- a gear program to model internal and external helical gears in Pro/E. This isn't a program for designing gears; Deere has other tools for that. But it creates the corresponding models for Pro/E assemblies. This program is used less often, but saves more time per use, at least a half hour.
- JDNest, a sheetmetal nesting program. It takes not just Pro/E outlines, but DXF and IGES too. You can copy the results between sites, and it can run in static mode (same parts every time) or dynamic mode (real-time mfg., any combination of parts on one sheet). There are savings here in NC programming time, but also real mfg. savings of material from efficient nesting of parts. This program gets a lot of use, and a lot of savings.
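For a feel of what the standards program in the first item does, here is a minimal sketch, in Python, of a batch standards check. Everything here (the rule set, the model record, the names) is invented for illustration; the real program works on live Pro/E models through the API.
```python
# Hypothetical standards rules, not Deere's actual standards.
REQUIRED_PARAMS = {"PART_NO", "DESIGNER", "MATERIAL"}
DATUM_NAMES = ("FRONT", "TOP", "RIGHT")  # required names for the first 3 datums

def check_model(model: dict) -> list[str]:
    """Return a list of standards violations for one model record."""
    problems = []
    missing = REQUIRED_PARAMS - set(model.get("params", {}))
    if missing:
        problems.append(f"missing parameters: {sorted(missing)}")
    datums = model.get("datums", [])
    if tuple(datums[:3]) != DATUM_NAMES:
        problems.append(f"first 3 datums should be {DATUM_NAMES}, got {datums[:3]}")
    return problems

# An invented model record, as such a checker might read from a workspace.
model = {"params": {"PART_NO": "X100"}, "datums": ["DTM1", "TOP", "RIGHT"]}
for problem in check_model(model):
    print(problem)
```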
You can see Deere keeps tabs on these programs after they are released: how often they're used, who uses them, and the cost savings. There's a tip here for any custom program, and it's not hard given a company network: collect information on who uses the program and how often. That can help justify the program itself, and other programs afterwards.
If you don't even know how often a program is used, you can lose touch with the users. Paul showed charts of the data, showing highs and lows in the use of different programs over time, and also the number of daily users (since one person alone could run it more than others). These charts gave a good deal of insight into how the programs are used.
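The mechanics of that tip can be very small. A minimal sketch, assuming the log lives somewhere shared (the path, fields, and program name below are hypothetical):
```python
import datetime
import getpass
import pathlib

# In practice this would point at a share on the company network.
LOG = pathlib.Path("usage.log")

def record_usage(tool_name: str) -> None:
    """Append one line per run: timestamp, user, and tool name."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with LOG.open("a") as f:
        f.write(f"{stamp}\t{getpass.getuser()}\t{tool_name}\n")

record_usage("jd_standards_update")  # hypothetical program name
```
Tallying runs per day or distinct users per program from such a log is then a one-liner in any scripting tool, which is all those usage charts require.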
PTC is in somewhat the same situation: PTC sells a bunch of software to a company, but any vendor can have a hard time figuring out whether people are using its software, how much, and in what areas. Any software vendor could provide better and more timely support if it knew more about how its product was used, and collecting that information is now possible over the Internet. It could be that at some point customers will have a choice whether to send that information to PTC: some might want to, others not.
Paul pointed out the costs of custom programs, like training, a Pro/Toolkit license if needed, and development time. And also maintenance: it's been said that 80% of the cost of a software program comes in maintenance, after the initial release. And that's just as true of a relatively modest home-grown custom program.
Just to be complete, Paul also listed the risks of custom programs: those maintenance costs may increase, and the vendor may supply the functionality at any time. You absolutely want to avoid creating a new process by adding a program: that's not the point. The point is to aid an existing process, by bridging a gap between a tool and the process. If you're creating a new process with your new custom program, you're probably creating problems instead of solving them.
- Concurrent Design at Motorola
Motorola made a major contribution to the user group by presenting the results of years of work to get industrial design and engineering to share the same Pro/E assembly successfully. The two presenters were Tim Sutherland representing ID and Scott Bots representing Engineering. It was very useful just to see the two personalities interacting in their typical ways: ID always wanting to change anything on the exterior, and Engineering struggling to keep some features fixed and stable on the inside.
The example was cellphones. You might think that the external (customer visible) features of a cellphone would become stable early. But no, there are many variations on any basic cellphone model, often just the exterior appearance. Just one customer may ask for a selection from several different designs, all varying just by the exterior. And a cellphone isn't trivial: there can be 1000 dimensions down one side (mirrored), and up to 10 engineers working on different areas of the interior, like the board and keys and switches and a display and so on.
Four years ago they were using the Master Model method, but with poor geometry quality (occasional visible blips), unsymmetrical surfaces (they did mirror, but after the mirror later changes might appear), and a design that wasn't very flexible, which didn't match the need to produce many variations on a design. They even had strange imports from outside the Pro/E world, like Alias geometry. Back then they used no splines, just line segments (perhaps because of the imported curves). And the Master Models weren't robust enough to support detailing: usually they'd develop the MM until it wouldn't shell any more, then have to add detail in the target parts. The master part had about 500 features.
Now they have a process. A major change is that they use splines, all native Pro/E splines. Tim emphasized making splines as simple as possible, with the minimum number of control points. People often think more control points must be better, but that's not so: you start getting kinks and bends, the geometry gets complex fast, and then it fails easily.
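To make that point concrete, here's a small numerical sketch (mine, not from the presentation) of how crowding a spline with points invites wiggles. It uses scipy's B-splines as a stand-in for Pro/E's:
```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 1.0, 200)

# Few points: a smooth, gently curving profile.
simple = make_interp_spline([0.0, 0.5, 1.0], [0.0, 0.8, 1.0], k=2)

# Many closely spaced points with tiny perturbations: roughly the same
# shape, but the spline now wiggles between the points.
t = np.linspace(0.0, 1.0, 15)
crowded = make_interp_spline(t, t + 0.02 * np.sin(40 * t), k=3)

# Second derivative as a curvature proxy: the crowded spline swings far
# more, and that's the complexity that fails downstream (shells, offsets).
print("max |y''|, simple :", float(np.abs(simple.derivative(2)(x)).max()))
print("max |y''|, crowded:", float(np.abs(crowded.derivative(2)(x)).max()))
```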
If you start simple, and get the end tangency the way you want, you may not need much more to finish the spline. A typical phone design begins with 2 parting quilts in space, representing the upper and lower parting lines (usually there's a vertical band all the way around the outside, between the parting lines). Those 2 quilts are among the first 5 or so features in the part, fundamental. From those two quilts, the top quilt, the top surface, is created. The top quilt gets developed until it doesn't shell anymore (always a limit there), and then you go back a step and offset the top quilt to get an inner quilt. The master model is always all surfaces.
Keypad and mike and speaker holes are part of the top quilt, but penetrate down through the inner quilt. That way they should still intersect after any changes to the inner quilt. Tim recommended using the sketcher approximate splines "judiciously", typically when you're combining a spline with an existing surface. At this point half of the phone is designed, and here it gets mirrored to create the other half. By the way, those keypad holes on a cellphone are called "chimneys", that's the technical term.
On the inside of the phone, the core side, where those engineers are working, they make a comfortable and self-explanatory environment for the engineers by creating an "Engineering Home" coordinate system (that's the actual name). The engineers use that, and not the default coord sys which is sitting down in a corner somewhere. ID doesn't use Engineering Home at all, it's just for the engineers on the inside. Something they can trust.
To convert the quilts into a solid, a solid block is placed around the master model, but not extending past the parting quilts. The inner quilt cuts the block, and then the material inside the inner quilt is removed, making the cavity. The inner surface is basically simple, unlike the outside. If wall thickness changes, they just offset from the inner quilt.
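The offset idea is simple enough to show in a few lines. A toy numpy sketch (my illustration, not Motorola's code): a constant wall thickness comes from moving every sampled surface point along its unit normal.
```python
import numpy as np

def offset_surface(points: np.ndarray, normals: np.ndarray,
                   thickness: float) -> np.ndarray:
    """Offset sampled surface points inward by a constant thickness.

    points  : (N, 3) sample points on the quilt
    normals : (N, 3) outward surface normals at those points
    """
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return points - thickness * unit  # minus the normal = inward

# Check on a unit-sphere patch, where the outward normal equals the point.
u, v = np.meshgrid(np.linspace(0.1, 1.4, 8), np.linspace(0.1, 1.4, 8))
pts = np.stack([np.sin(u) * np.cos(v),
                np.sin(u) * np.sin(v),
                np.cos(u)], axis=-1).reshape(-1, 3)
inner = offset_surface(pts, pts, thickness=0.1)
print(np.linalg.norm(inner, axis=1).round(3))  # all 0.9: a uniform wall
```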
After removing the inner material, then they remove the outer material from that solid block, and they have the thinwall part itself. And in general the outer skin can fluctuate without affecting the bosses and ribs and other internal features.
Engineers now work in insert mode, always seeking stable geometry, back in the model tree between the inner quilt and the outer quilt (the outer quilt was created later, after the inner quilt). As usual, drafts and rounds are created as late as possible, with intent surfaces for drafts and intent chains for rounds, to tie those features to the underlying geometry and not just to some edge. A round on 4 edges will fail if they become circular (variations on a boss, for example), but not if it's an intent chain round. Features like ribs and bosses on the inside are "overbuilt", extending out to the outside of the solid block, so they can't fail due to a change in the internal surface when ID modifies the exterior.
Now the engineers exit insert mode and create interior geometry that has to depend on outside surfaces, the surfaces that ID plays with constantly. This is obviously a risky step, and geometry created by the engineers here may not survive a redefine; engineers could lose 5% to 20% of their work at this step because of an exterior change. But usually fixing the problem takes an engineer about an hour of work in resolve mode, where before it used to mean starting over from scratch.
So a major feature of the process seems to be that risk is accepted, and the area at risk well defined and known to everyone. For the engineers who want to avoid risk, their protection is to work as early in the design as possible, in insert mode, inside the inner quilt. While ID will work at the end of the model tree, manipulating the final outside surfaces.
Tim said they use ISDX little, because they need to manage dependencies (like the board inside). They did use ISDX once for a large lens, to get precise continuity, because Pro/E "doesn't like surface continuity". A tip for visual adjustment of a spline: IGES it out and back in, and then use that IGES spline as a guide for changing the original spline.
In answer to a question, Scott said they use a skeleton as needed to provide shared references in the interior of the phone (like two matching bosses in the top and bottom halves).
- Top-Down Design: Evolving Process
Brian Adkins from John Deere gave what could be called a pretty sophisticated presentation on Top-Down Design (TDD). Sophisticated because he was emphasizing the overall process, breaking a larger task into small pieces (and then assembling them back again, not forgetting that vital step). This kind of process view usually seems to happen only after some time, with any new technology.
So Brian mentioned various tools used with TDD: skeletons and layouts and simplified reps. and whatnot. But he mentioned them only in passing, as tools you can use; his interest was the process. There's no tool that defines TDD, not even a skeleton.
Brian was interested enough in the Top-Down Design process to find out the origins of the name. And it turns out it isn't a PTC name, or even a mech. design name. The phrase was originally used by Niklaus Wirth, the computer scientist who invented the Pascal programming language, in a paper back in 1971. And he was just talking about software design. But he had the essential point: break problems down until the solutions become easy.
Divide and Conquer is Brian's favorite way to describe that general approach. For Pro/E and TDD, he suggested: "efficiently distribute design tasks among multiple users and prevent downstream problems".
Or, Brian proposed, instead of saying "Product First", try saying "Structure First". That does suggest the kind of orientation that can make TDD succeed. Brian mentioned that at Deere there are managers who want nothing to do with TDD. It would be interesting to know how many PTC customers have succeeded with TDD, and how many have failed. Could be the numbers are about equal, say.
You'd think that the first step towards success with TDD would be to send people to class. But Brian pointed out that the typical TDD class is very routine and regimented, and gives the students a script to follow, use these particular tools to get these particular results. In that kind of class, there isn't enough attention to planning and structure, which probably make most of the difference between success and failure.
So after people return from TDD class, Deere tries to salvage a chance of success with TDD by introducing them to TDD planning sessions. In a planning session, there is a screen showing Pro/E. But then there's another screen, side by side, serving as a whiteboard for diagramming TDD structures. Brian uses Visio as the tool for the virtual whiteboard, because it has many symbols and ways to describe relationships. So it's easy to sketch relationships between components, and find out what kind of TDD structure looks good for a particular project (and that can vary, from one project to the next).
There are weak points and failure modes to consider, also the people on the project and their experience, also downstream uses of the TDD data. What might work fine for one project might be a real failure for the next, depending on these kinds of issues. Is the product going to be actually configured in Pro/E, or in Windchill, or in MRP, or somewhere else (and then, what about simp. reps, Pro/Program creation of parts, family tables, manual drawing changes to BOMs, will they be in that final configuration). Motion analysis is another issue, motion and TDD are "like oil and water".
And then you might use map parts, or copy parts, or copy geoms, or you might not use any of them. You might use skeletons, or you might not use any skeletons (using external data sharing instead). If you do use skeletons, you could have a separate skeleton control assy, and then use copy geom to get the info over to your assy. There may be a trend here toward reducing the use of skeletons within a TDD assy.
You probably want to think in advance, and document, how long your external ref. paths will be. What if one breaks? Will you even know it broke? What will it take to fix it? Is there a level in the assy above which no external refs are permitted?
Again thinking of sketches and diagrams, Brian suggested describing the information flow in a proposed TDD assy. For example, if the information flows from A to B to C to D, you don't want to see a reverse current flowing back from C to B. You can use the Global Ref. Viewer within Pro/E itself to look at those arrows, those flows.
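That check is easy to script if you can dump the reference pairs somewhere. A hedged sketch (the model names and references are invented; in practice the pairs would come from a Global Ref. Viewer report or similar):
```python
# Intended information flow: A -> B -> C -> D.
INTENDED_ORDER = ["A", "B", "C", "D"]
rank = {name: i for i, name in enumerate(INTENDED_ORDER)}

# (source, target) means "source takes an external reference from target",
# i.e. information flows from target to source. The last pair is backward.
references = [("B", "A"), ("C", "B"), ("D", "C"), ("B", "C")]

for src, tgt in references:
    if rank[tgt] > rank[src]:
        print(f"reverse current: {src} references {tgt}, "
              f"which sits downstream of it in the intended flow")
```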
- Great Top-Down Designers of the Past
As Brian was talking about the importance of planning and structure for Top-Down Design, I wondered about some of the great designers of the past. I'm thinking of the mechanical designers of 70 years ago, say (the 1930's), who created airliners and battleships and railroad engines and every other kind of large machine, with just board and paper and ink.
Seems to me that those great designers, and there were hundreds of thousands of them in the US alone, had to have a very sophisticated and deep knowledge of Top-Down Design. As teams and departments and companies, they had to know how to break down the biggest projects into the smallest necessary pieces, defining interfaces (though they probably didn't use that word) and dependencies and rules all over the place.
So why are we having problems with planning and structure of large designs now, 70 years later? Don't we know more now than they ever did?
Well, perhaps we don't know as much, in the planning and structure of large mechanical designs. Back then, if you had a few hundred designers, they couldn't begin to do any work until planning and structure were complete. There was nothing they could do, just sitting at the board, until that job was done first.
Now, however, any person or group can start designing on a computer without paying any particular attention to the planning and structure of their project. Perhaps our general approach (in other areas besides mech. design, too) is that because the computer makes changes easy, we don't really need to plan as much in advance. Even though generally we find there's a price to pay afterwards when the changes do come. Time to market drives a lot of us, and sitting around planning doesn't look like as much of a contribution to time-to-market goals as banging away on keyboard and mouse.
The advance planning and structure techniques that were routine and fundamental in large mechanical design 70 years ago probably now survive more in large software design and large electrical design (like microprocessors). Ironically, software and electrical design have produced mech. CAD, and mech. CAD has made it easier to start large mech. projects without thinking so much about planning and structure.
The story of Top-Down Design among Pro/E users, as Brian told it, seems to advance from concentrating on tools to concentrating on process. So if we work on process and planning and structure, some day we may share the same intuitive and deep understanding of Top-Down Design as those great Top-Down designers of the past. |
|