Forum Replies Created
mkoopmans (Participant)
Hi Jeff,
The PHM Society’s third annual conference was a great success from my perspective, so thank you as chair (along with the management team and board of directors) for working to find a balance between academia, government, and industry. Making a pilgrimage to meet the rest of the Society enabled me as a researcher and us as a community to learn, interact, and take a collective pulse on the state of the science/art of PHM.
This is my second PHM conference as a paper-presenting student, so my take will differ from that of most attendees. However, I think I speak for many when I say that the next conference should have more of the same – we all know that this breadth of knowledge, and the variety of formats in which it is shared, is rarely found elsewhere.
An aspect of PHM that I would like to see more of is systems engineering. The bridge from laboratory to platform is so important, and so difficult to build, that I believe it warrants more attention – the paper sessions report many breakthroughs in the laboratory (composites, electronics, batteries), yet few successes in the field (HUMS at Sikorsky). Granted, research at the system level is extremely difficult, expensive, usually proprietary, and sometimes classified. I think the staff at NAVAIR have the right idea in using a test squadron to see how an operational PHM fleet would work. Papers or presentations showing how PHM requirements were translated into design requirements (data transfer rate, power consumption, sensor mass), followed by the implemented solution, would be very interesting. More generally, given an operational platform (UAV, ground vehicle, etc.), how much additional effort – measured in time, money, computational power, data analysis, model validation, etc. – is needed to implement an operational prognostics architecture?
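To make the requirements-flowdown idea concrete, here is a minimal sketch in Python of how candidate PHM sensor suites might be checked against platform-level budgets such as data rate, power, and mass. Everything in it – the sensor suite names, the budget numbers, and the helper function – is invented for illustration; it is not anything presented at the conference.

```python
# Hypothetical sketch: checking candidate PHM sensor suites against platform budgets.
# All names and numbers below are made up for illustration only.
from dataclasses import dataclass

@dataclass
class SensorSuite:
    name: str
    data_rate_kbps: float   # sustained data transfer rate to the health management unit
    power_w: float          # average power draw
    mass_kg: float          # added sensor and harness mass

# Illustrative platform-level budgets that the PHM design requirements must fit inside.
BUDGET = {"data_rate_kbps": 250.0, "power_w": 40.0, "mass_kg": 2.5}

def fits_budget(suite: SensorSuite) -> bool:
    """Return True if the candidate suite stays within every platform budget."""
    return (suite.data_rate_kbps <= BUDGET["data_rate_kbps"]
            and suite.power_w <= BUDGET["power_w"]
            and suite.mass_kg <= BUDGET["mass_kg"])

candidates = [
    SensorSuite("accelerometers + strain gauges", 180.0, 22.0, 1.8),
    SensorSuite("accelerometers + strain + acoustic emission", 320.0, 35.0, 2.2),
]

for c in candidates:
    print(f"{c.name}: {'fits the budget' if fits_budget(c) else 'exceeds the budget'}")
```

Even a toy check like this makes plain where a PHM concept starts costing the platform something, which is exactly the kind of trade I would like to see documented in future papers.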
It is easy to dwell on the problems of the aerospace sector; other domains were covered in the excellent paper sessions. Proving the utility of PHM in other fields will hopefully give aerospace designers more leverage as the technology ‘earns its way onto the aircraft.’ Notable examples included Giulio Gola’s talk on choke valve prognostics in the oil & gas sector, where reliably gathering data from human maintainers was one of the largest obstacles to a confident RUL estimate, and Peter Ghavami’s presentation comparing model predictions in a case study of deep vein thrombosis in hospital patients.
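To illustrate why unreliable maintainer records matter so much for RUL confidence, here is a toy Python sketch of my own – the linear wear model and every number in it are invented, and it has nothing to do with Gola’s actual method – showing how uncertainty in a single logged overhaul time widens the RUL interval.

```python
# Toy example: propagating uncertainty in a maintainer-logged overhaul time into RUL.
# Linear wear model and all numbers are invented for illustration only.
import random

random.seed(0)

WEAR_RATE = 0.001          # wear units per operating hour (assumed known here)
FAILURE_THRESHOLD = 1.0    # wear level at which the component is considered failed
HOURS_SINCE_LOG = 600.0    # operating hours since the last logged overhaul

def rul_hours(overhaul_error_h: float) -> float:
    """RUL in hours if the true overhaul happened overhaul_error_h hours off the logged time."""
    wear_now = WEAR_RATE * (HOURS_SINCE_LOG + overhaul_error_h)
    return max((FAILURE_THRESHOLD - wear_now) / WEAR_RATE, 0.0)

# Case 1: maintainer records taken at face value.
print("RUL with records taken at face value:", round(rul_hours(0.0), 1), "h")

# Case 2: logged overhaul time uncertain by +/-150 h (uniform), propagated by Monte Carlo.
samples = sorted(rul_hours(random.uniform(-150, 150)) for _ in range(5000))
lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"RUL 90% interval with a +/-150 h logging uncertainty: [{lo:.0f}, {hi:.0f}] h")
```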
I enjoyed seeing the work coming out of NASA Ames, particularly their emphasis on using real data and fielded systems (ground robots, UAV batteries, and pneumatic launch valves) for validation. In addition, Bob Randall’s machine diagnostics tutorial and Scott Clement’s introduction to prognostics were both great technical presentations.
Another suggestion is to hold the fielded systems panel session near the beginning of the conference, instead of at the end when many attendees have departed. As a student, it is very interesting to hear the arguments that follow the presentations – last year’s revolved around the correct design methodology or basis on which to derive prognostic requirements (RCM or an offshoot?), while this year’s centered on failure models – their definitions, construction, and inference. Holding the panel earlier would also give people an incentive to discuss PHM problems with others throughout the conference.
Another rabbit hole, in my opinion, is PHM software. The folks at NASA Ames have taken steps toward formalizing and standardizing PHM results and research, yet I did not see many papers using their proposed evaluation metrics. Both Jeff Banks and Mike Houck mentioned the problems they encounter with software – the Army’s bulk CBM data (and its ownership) and the Navy’s lack of a common ground station, which prevents common tools from being shared. This may be a fruitful problem for software engineers: researching the most effective path through a system for PHM-related data and its analyses.
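For readers who, like me, have not yet tried the Ames metrics, here is a minimal Python sketch of an alpha-lambda style accuracy check as I understand the idea – the RUL prediction at a chosen point in the unit’s life should fall within plus or minus alpha of the true RUL. The model names and numbers are invented; consult the published metrics papers for the exact definitions.

```python
# Hedged sketch of an alpha-lambda style accuracy check; values are invented for illustration.

def alpha_lambda_pass(rul_true: float, rul_pred: float, alpha: float = 0.2) -> bool:
    """True if the prediction lies within +/- alpha * rul_true of the true RUL."""
    return abs(rul_pred - rul_true) <= alpha * rul_true

# Invented example: halfway through life the true RUL is 120 h.
checks = [("conservative model", 100.0), ("optimistic model", 160.0)]
for name, rul_pred in checks:
    ok = alpha_lambda_pass(rul_true=120.0, rul_pred=rul_pred)
    print(f"{name}: predicted {rul_pred:.0f} h -> {'inside' if ok else 'outside'} the 20% band")
```

Reporting even a simple check like this alongside each case study would make results far easier to compare across papers.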
I believe that covers my major points, so in summary, my feedback/suggestions are:
– more systems engineering
– earlier fielded systems panel session
– continue to promote PHM to other domains/fields (oil & gas, human/patient health, biology, renewable energy, consumer products, automotive, etc.)
– continue data challenge, consortium, demos
– continue the outstanding social program
– encourage large companies, contractors, agencies, and groups to share their lessons learned
I find it exciting to be involved with PHM – rarely do we see an engineering field evolve and grow so quickly, across so many disciplines. And so, thanks again, may the Society’s researchers, reviewers, and sponsors continue their arduous endeavor. See you at next year’s conference.
Michael Koopmans
Oregon State University
mkoopmans (Participant)
Problem solved using Stuffit.
Mike