Wednesday, May 30, 2012

Be The Smartest Person in The Room


This one is really, really easy. Honestly, I can't believe we ever agreed as an industry to do this the way almost everyone has been doing it.

When worksharing is enabled and the first save creates your central model, many people append _CENTRAL to the end of the file name. So when a local file is created you get: ProjectName_CENTRAL_USERNAME

It's long, it's redundant, and the CENTRAL moniker does nothing for you. Instead, use that space for something useful, something that provides valuable information about your Revit file. End your file name with a version number: ProjectName_2013_USERNAME
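If it helps to see the pattern spelled out, here is a throwaway sketch (plain Python; the project name, version, and user name are made up) of how the two conventions compare once Revit appends the user name to a local file:

# Hypothetical names, just to show the two conventions side by side.
project, version, user = "ProjectName", "2013", "USERNAME"

old_local = "{}_CENTRAL_{}.rvt".format(project, user)       # ProjectName_CENTRAL_USERNAME.rvt
new_local = "{}_{}_{}.rvt".format(project, version, user)   # ProjectName_2013_USERNAME.rvt

print(old_local)  # long, redundant, tells you nothing new
print(new_local)  # the same space spent on useful information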

If you are currently appending CENTRAL to your file names, float this suggestion and I bet everyone will agree with you. It's an easy way to be, at least for a moment, the smartest person in the room.

Wednesday, May 16, 2012

Mass Effect

Modeling your project during conceptual design can drive the most benefit during the "pre-schematic, post-pre-design" stage. Unfortunately, this is usually not a time in a project when much attention is being paid to a 3D digital model. When you model conceptually, you can gain a deeper understanding of your project without spending abundant hours developing any single idea. Massing tools are how we model iteratively without weighing our ideas down with detail.

Not all massing tools are created equal. SketchUp is great for quick modeling and visualization, but the data behind the model isn't there, and the downstream use of those objects for analysis and documentation in Revit is very limited. There are some tools that allow you to analyze inside of SketchUp (the IES toolkits have a plug-in for it, for example), but I can't do anything with those models later on. More often than not, designers misuse SketchUp by creating detailed forms just to get an image. This wastes hours on a project without creating anything to leverage later on. Models in Rhino usually suffer the same fate.

The massing tools inside of Revit can be used to create a 3D generic form with schedulable parameters for surface area, volume, perimeter by floor, and area by floor or building. These masses can also be used for high-quality renderings via FBX export into 3ds Max Design, or as rapid energy models for comparative analysis. Exporting the mass model to a gbXML file also opens up many other analysis options in outside software. Autodesk Labs' Project Vasari expands on this even more, with solar radiation and wind tunnel studies run on mass models directly inside a Revit-style interface.

The beauty of these types of massing studies is that they are quick and provide a lot of good comparative data. The key word there is 'comparative'. Model multiple options, then compare them to understand and proceed with your design.
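If you are comfortable scripting, the gbXML hand-off mentioned above can even be kicked off from inside Revit, which makes re-exporting each option for comparison less of a chore. Below is a minimal RevitPythonShell-style sketch; the output folder and file name are hypothetical, and it assumes a document is open with its energy settings already configured:

# Minimal sketch: export the current model's energy/mass data to gbXML for
# downstream analysis tools. Folder and file name below are hypothetical.
import clr
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import GBXMLExportOptions

doc = __revit__.ActiveUIDocument.Document  # RevitPythonShell convention

options = GBXMLExportOptions()
if doc.Export(r"C:\Temp", "mass_energy_model", options):
    print("gbXML written to C:\\Temp")
else:
    print("Export failed - check the energy settings on the mass model.")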

I have worked with some firms that take massing to a new level inside of Revit, showing programmed areas conceptually by using separate mass objects with individual materials assigned for color differentiation. There is quite a bit of legwork there, and if you need to respond to a tight space program, a piece of software like Affinity (from Trelligence) would probably be the better option.

I think a lot of us got caught up in the "low-hanging fruit" pitch that conceptual modeling allows. It is "low-hanging fruit", not "fall into your lap fruit". There is some effort involved. You have to change your design workflow to accommodate a new contributor: the mass model.

Thursday, May 10, 2012

Habitually Successful


While teaching BIM software during my career with IMAGINiT, I have noticed common habits among those who pick up the software quickly and are successful with it.

I am sure I am not the first, nor will I be the last, to put a list like this together. This was just something on my mind. Without further ado, in no particular order...

Click every button - When you are learning a new application, this lets you see all that is possible and can give you insight into how objects, views, and features relate.

Follow exercises in class and practice outside examples - Throughout the course there are times for you to practice; use this time wisely. Start written exercises promptly, make sure you get some repetition on each part of the exercise, and do them again after the class. Draw your own house; this will help you ask the questions you don't know to ask until you are in a real project.

Come with curiosity and questions - When you leave the first day of class, look at the buildings you pass and start to think about how each building envelope might be constructed with objects in Revit. If you don't know the answer, ask, and I bet everyone will learn something.

Not afraid to click their mouse - Training classes are, for the most part, a controlled environment; you aren't going to mess anything up by clicking… a lot. At the same time, be mindful of on-screen cues.

Excited about the technology change and how it can affect their career - Attitude is 85% of the experience of learning new things. I see students get discouraged over the smallest inconveniences in the software. Big picture: this change is happening, and now is the time to learn it.

Punctuality - Architects are often late, contractors are often early, <insert another generality here>. Bottom line: if you miss 5 minutes, they might be "the" 5 minutes that clarify an important aspect of the software.

Understand standard Windows functionality (for example, the ability to find and save something to a specific folder on your C drive) - Having to cover this kind of thing in a Revit class really gums up the works. Comfort using a mouse is a must as well.

Stay adequately caffeinated - This one is not universal. For me, though, it helps.

Tuesday, May 8, 2012

User Created Worksets

One of the first things I look at when performing a Revit Health Check for a client is their use of user-created worksets. First, is the list very long, with some worksets representing information that is meant to be used downstream (e.g. Revit schedules, Navisworks search sets, e-SPECS bindings)? Second, are the objects correctly assigned to their respective worksets?
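If you want to run that first check yourself, a quick RevitPythonShell-style sketch like the one below will list the user-created worksets and how many elements sit on each. It assumes a workshared model is open, and it only answers the first question; judging correct assignment still takes eyes on the model.

# Minimal audit sketch: list user-created worksets and element counts.
# Assumes a workshared model is open (RevitPythonShell/pyRevit style).
import clr
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import (ElementWorksetFilter, FilteredElementCollector,
                               FilteredWorksetCollector, WorksetKind)

doc = __revit__.ActiveUIDocument.Document

for ws in FilteredWorksetCollector(doc).OfKind(WorksetKind.UserWorkset):
    count = (FilteredElementCollector(doc)
             .WherePasses(ElementWorksetFilter(ws.Id))
             .WhereElementIsNotElementType()
             .GetElementCount())
    print("{}: {} elements".format(ws.Name, count))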
 
More often than not, the list of user-created worksets is very long and the components are not placed on the correct workset (thankfully, Revit 2012 makes finding and fixing this much easier). Worksets should never be used to represent data that needs to be extracted, because assigning an object's workset is a manual process. The beauty of Revit is that it eliminated the manual process of defining an object's type/layer, and the data inside Revit components is organized in a database that is part of every object. Revit MEP systems are a perfect example of this: as you connect objects to a system of a specific type (e.g. Hydronic Supply, Return Air), they are populated with that data. In essence, you couldn't be wrong with Revit.
 
Worksets should instead be broad categories that relate more directly to job roles. Limit the number of worksets and your users will get it wrong less often; let them "set it and forget it" (if I may steal a line from infomercial giant Ronco). The worksets will be easier to manage, and you can confidently use all the great performance-enhancing benefits of user-created worksets.
 
Look for more content on user-created workset best practices for all disciplines on the IMAGINiT Portal and ProductivityNOW.

Thursday, May 3, 2012

Being a Bad Background

In an earlier post I described how architects and MEPF engineers can learn to benefit from a multi-discipline Revit environment by respecting and anticipating the pain points and natural workflows of the other. Now I'd like to talk about structural engineers.

The reliance on linked geometry to host elements really isn't present between Revit Architecture and Revit Structure. Instead, there are the issues of redundantly modeled geometry and the architect's documentation reliance on structural elements.

Many firms have worked through the first of the two issues by clearly defining which elements the structural engineer will own and which the architect will. Structure might own the slab; architects might own the floor finishes. Structure might own the roof deck (usually modeled as a floor) and the architect will own the roofing above the deck (usually modeled as a roof). It takes a thorough LOD document, but it can be and has been accomplished.

The other issue is a little more troublesome: an architect's reliance on structural elements to complete certain document deliverables. A lot of architectural firms fake in structure so that they can get documents out the door (e.g. foundation and stoop conditions, trusses, framing). Significant and detrimental time is lost when this has to be done, but sometimes it is necessary.

Architecturally, you have to communicate which shared items are required and when they are required. Structurally, you have to make modeling accommodations for the architectural documents.

I can hear the structural engineers now: "easy for him to say". Well, it is easy for me to say, and it is easy for them to do. May I be the first person to say (although probably not really the first): structural engineers have had the least to change and adjust to in a Revit workflow. A little cooperation will go a long way on this one.

I want to point out that LOD really solves both of the issues outlined above, but it doesn't have to be the AIA E202 document. Think about a collaborative requirement, think about a usable "desktop" standard, think about a logical and time-saving document that might just save your profit and really set you apart. OK, horse officially beaten.

Tuesday, May 1, 2012

Being a Bad Host

I remember when I first started using Revit Architecture: we were modeling key structural elements ourselves and using linked CAD files from MEP to complete our RCPs. I would bug my reseller at every turn to help me understand how the disciplines would work together, without answer. Then an "Autodesk guy", as he was referred to in later conversation, told our Revit users group that Revit MEP wasn't ready yet and that no one should use it. They, of course, continued to sell it.
 
Many years later, I am 3.5 years into my use of Revit MEP and have had the pleasure of seeing how many different firms are exploiting, or suffering through, a multi-discipline Revit workflow. The core problem between Revit MEP and Revit Architecture seems to be the high level of dependency MEP objects have on host architectural faces.

It is very easy for an architect to delete and recreate geometry that is, unbeknownst to them, the host of an MEP object. This destroys the "warm and fuzzy" feeling Revit gives us about elements staying spatially coordinated. In fact, it can be a nightmare even if the element remains hosted. For example, sometimes a ceiling might move for some design purpose. If the hosted elements have hard connections (e.g. air terminals connected to ducts), the architect runs the risk of destroying duct networks that don't have the space to adjust.

Two strategies need to be undertaken to make this work better:

First, communicate design changes outside of Revit. This is an age-old problem between architects and MEPF consultants. You either need to set up a brute-force way of communicating (an email report of model changes by room upon receipt of a new model) or a software-centric comparison automation (in Navisworks or the Compare Models Revit extension).
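On the brute-force end, even a crude snapshot-and-diff script gets you part of the way there: dump what is in the received model each time it arrives and compare the text files. A minimal sketch, assuming RevitPythonShell or pyRevit and a hypothetical output path (this is not the per-room report described above, just the raw ingredients for one):

# Crude snapshot of the current model: one line per element instance.
# Diff two snapshots from successive transmittals to see what changed.
import clr
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import FilteredElementCollector

doc = __revit__.ActiveUIDocument.Document

with open(r"C:\Temp\arch_model_snapshot.txt", "w") as snapshot:  # hypothetical path
    for elem in FilteredElementCollector(doc).WhereElementIsNotElementType():
        category = elem.Category.Name if elem.Category else "<no category>"
        snapshot.write("{}\t{}\t{}\n".format(elem.UniqueId, category, elem.Name))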

The most important change that needs to happen is both parties understanding the limitations, implications, and realities of a coordinated workflow in Revit.

Second, MEPF engineers need to respect and anticipate how the architect's model will change. MEPF engineers can host elements on reference planes in areas where face-hosted elements are likely to be orphaned or disturbed by change. The model adjustment will be manual, but many hosted elements can be changed at once. Or maybe no elements should be hosted at all. I ran an unconference session at this past AU called "Leveraging an Architect's Model in Revit MEP", and this was the sentiment of the group that attended. Those who had once hosted on planes no longer do so. A vast majority actually have a standard of non-hosted components for everything.
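If your firm is weighing a non-hosted standard, it helps to know how much hosting is actually in the model today. A minimal RevitPythonShell-style sketch of that audit (the categories and names it prints will depend entirely on your model):

# Minimal hosting audit: count hosted vs. unhosted family instances and
# report what each hosted instance is attached to.
import clr
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import FamilyInstance, FilteredElementCollector

doc = __revit__.ActiveUIDocument.Document

hosted, unhosted = 0, 0
for inst in FilteredElementCollector(doc).OfClass(FamilyInstance):
    host = inst.Host
    if host is not None:
        hosted += 1
        host_desc = host.Category.Name if host.Category else type(host).__name__
        print("{} hosted by {}".format(inst.Symbol.Family.Name, host_desc))
    else:
        unhosted += 1

print("Hosted: {}  Unhosted: {}".format(hosted, unhosted))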

Bottom line: this is a two-way street. Architects need better ways to communicate changes with each model update, and MEPF engineers need to host (or not host) objects in a way that protects them against damage caused by changes outside of their control.