

Aspect Oriented Programming (AOP)

Posted on 2005-06-04 18:01 by chen77716
Interview with Gregor Kiczales
Topic: Aspect Oriented Programming (AOP)
http://www.theserverside.com/events/videos/GregorKiczalesText/interview.jsp


Gregor Kiczales led the PARC team that developed AOP and AspectJ. He is a well-known evangelist for AOP, focused on building both the practitioner and researcher communities. He has 20 years of experience in developing advanced programming technologies and delivering them into the hands of developers. He was a member of the ANSI CLOS design team, the implementer of the reference implementation of CLOS, the lead designer of the CLOS metaobject protocol and a co-author of "The Art of the Metaobject Protocol" (MIT Press, 1991). He is now NSERC, Xerox, Sierra Systems Professor of Software Design at the University of British Columbia, where he works with students on a variety of research in AOP, including application to operating systems, refactoring, patterns and new language constructs.
Dion: Tell us a little about the history of AOP?

Gregor: AOP grew out of research by several different groups, building on several different strands including reflection and variant strains of object-oriented programming. The early research - I am talking here about the early 90s - was driven by more than ten years of experience building systems with OOP, and really beginning to see some of the limitations of OOP, in big systems, in distributed systems, and in systems where flexibility was critical.

The team I was fortunate enough to lead, at Xerox PARC, first worked on reflection and metaobject protocols, which are now considered critical to providing certain kinds of flexibility in OO programs. We then worked on an idea called open implementation, which was similar to what is now called white-box abstraction. Finally we came to AOP, where we felt we had focused in on a key issue common to these technologies - enabling programmers to modularize crosscutting structure in their code. Our work on AspectJ began in 1997, the first outside users came in 1998, and the 1.0 release was in 2001. The AspectJ work was supported by DARPA.

Dion: What does crosscutting structure mean?

Gregor: It means there are two or more structures (or decompositions) such that neither can fit neatly into the other. If we look at the classic AspectJ figure package example, shown in the diagram below, we see one structure, in black, that includes FigureElement, Point and Line; this structure talks about the state and drawing behavior of the figure elements. The second structure, in red, includes the DisplayUpdating aspect; it talks about how a change to the state of the figure elements should trigger a display refresh.

One aspect of the setX, setY, setP1 and setP2 methods is described in the black figure element structure - how the setter methods change the state of figure elements. A different aspect of the same methods is described in the display updating structure - that they all refresh the display after being called.

(I know this example seems trivial, and that it should actually keep track of exactly which displays each figure element is on, but in this simple form it really helps to see what crosscutting means.) In the figure, the DisplayUpdating aspect is a single unit. But that same behavior could not be localized as a single unit in the black structure. Trying to do so would force the code implementing the behavior to be scattered across all the methods that change display state. We say that the structure of the figure element state concern and the structure of the display refresh concern crosscut each other.

Crosscutting structure differs from hierarchical structure. Crosscutting means carving the system up differently, not just removing details to get a more abstract view.

Crosscutting is a deeper property than scattering. Scattering refers to the fact that in a given implementation, the code for a concern is spread out. Crosscutting refers to the inherent structure of the concerns, e.g. that the main graphical behavior and the display refresh behavior inherently crosscut each other. The goal of AOP is to enable the modular (not scattered) implementation of crosscutting concerns.
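The display-updating concern described above is exactly what an aspect localizes. As a minimal sketch in AspectJ (the class and method names follow the classic figure example from the interview; the exact pointcut wording is illustrative, not the canonical tutorial code):

```aspectj
// A sketch of the DisplayUpdating aspect from the figure example.
// The pointcut names every state-changing setter in one place.
aspect DisplayUpdating {
    pointcut change():
           call(void Point.setX(int))   || call(void Point.setY(int))
        || call(void Line.setP1(Point)) || call(void Line.setP2(Point));

    // One piece of advice replaces a Display.update() call that would
    // otherwise be scattered across all four setter methods.
    after() returning: change() {
        Display.update();
    }
}
```

The black figure-element structure stays untouched; the red display-refresh structure lives entirely in this one unit, which is what "modular implementation of a crosscutting concern" means here.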

Dion: What is the state of AOP?

Gregor: AOP technology and practice is now moving from the invention phase to the innovation phase. A new wave of people is discovering the technology and helping to drive its widespread adoption. So this is a great and exciting time.

The invention phase was characterized by more focus on real-world applications, and a much more rapid move to commercialization than is typical for programming technology. But of course, from the perspective of innovators, any invention phase looks slow and the new movers in the space are looking to speed things up. As a community, we are now looking for the first killer application for AOP. Many people are betting that will be distributed enterprise applications.

Dion: What are some of the challenges you see facing AOP development at this time?

Gregor: In answering this question I think we can learn a lot by looking at the history of OOP, particularly during the 80s. The first major challenge is to avoid over-hyping the technology. We have to be careful not to let our enthusiasm make us sound like we are claiming AOP will solve all software problems. AOP is a modularity technology - to the extent you have good developers, who understand the domain, AOP will help them develop better, cleaner, more reusable code more quickly and easily. It will actually do a bit more than that, because as we learned with OOP, good modularity technology can help you figure out your domain as you go. But AOP will not make all software simple. It will not solve all modularity problems. It is an important new tool in the developer's arsenal that solves a thorny class of software structuring problems. We have to be careful about this so that we do not appear to be making wild claims.

Another challenge is to be sure not to forget what we have already learned. Whenever a technology goes from invention to innovation, there is a danger that some of the lessons of the previous phase will be lost. There is a natural tendency for this to happen, as innovators re-work and re-package the technology to serve specific technical/market niches. I think it is vital that the inventors and the innovators work to try and build bridges to each other - again we can learn from OO, where the OOPSLA conference was an excellent example of building this kind of bridge. I would like to see the innovators and the inventors work to communicate actively. Some design tradeoffs made in the previous phase may not be appropriate for today; but there are key technical issues learned in the previous phase that should be carried forward. I think we can all learn a lot from this exchange, and will help us really move forward.

Unfortunately, this kind of bridging is not easy - I know I have already made some missteps trying! The annual Aspect-Oriented Software Development conference (http://aosd.net/conference) aims to follow the OOPSLA model, and so it is one good place for this bridging to happen. Next spring the conference includes a keynote by Danny Sabbah (IBM Vice President, Development and SWG Technology, Application Integration & Middleware Division, Software Group). Since IBM is making significant investments in AOP this talk should be quite interesting.

Another critical point is that the AOP community has to compete against the status quo, rather than against each other for the existing AOP users. Here again you can look at the success of the OOP community. At the early OOPSLA conferences there was a concerted effort for the different language camps to talk about what was good about the other OO languages, and to promote OOP in general over OOP in particular.

Dion: Many people seem to use the same system concerns to talk about AOP (e.g. logging, debugging, transaction demarcation, security etc.). Can you give any examples of crosscutting business concerns that you have come across?

Gregor: The first thing I would note is that the concerns you mention are themselves significant. But to get to your question, there are many other concerns that, in a specific system, have crosscutting structure. Aspects can be used to maintain internal consistency among several methods of a class. They are well suited to enforcing a Design by Contract style of programming. They can be used to enforce a number of common coding conventions. A number of OO design patterns also have crosscutting structure and can also be coded in a modular and reusable way using AspectJ. There are also good examples in the books on AspectJ.
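The convention-enforcement and Design by Contract uses mentioned above can be sketched in AspectJ. This is a hypothetical illustration - the package name com.example.dao and the Point precondition are invented, and a real policy aspect would be tailored to the codebase:

```aspectj
// Illustrative policy aspect: one static check plus one dynamic check.
aspect PolicyEnforcement {
    // Compile-time convention check: flag any JDBC use outside the
    // (hypothetical) data-access layer. Violations become compile errors.
    declare error:
        call(* java.sql..*(..)) && !within(com.example.dao..*)
        : "JDBC calls are only allowed in the DAO layer";

    // Design-by-Contract style precondition, checked before every call.
    before(int x): call(void Point.setX(int)) && args(x) {
        if (x < 0)
            throw new IllegalArgumentException("x must be non-negative");
    }
}
```

The `declare error` form runs entirely at weave time, so the convention costs nothing at run time; the `before` advice is the dynamic counterpart for contracts that depend on argument values.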

So programming with AOP involves programming with both classes and aspects as natural units of functionality, and aspects, like classes, are really a generic concept. Adrian Colyer, of IBM, who is the AspectJ project lead on eclipse.org, puts this well:

When anyone first comes to AOP, the examples they see and the kind of concerns that they deal with tend to be very orthogonal to the core application they are working on (or non-functional if you prefer that term). I am talking about the classic things like tracing and logging and error handling etc. In the AspectJ tutorial we sometimes call them auxiliary aspects.

When you use a tool like AspectJ that supports AOP in the same way we are used to getting support for OOP, then you start to see not just auxiliary aspects, but what I call core aspects throughout your code. Personally, I will even use an inner aspect inside a single class to handle a policy that cuts across the methods of that class (acquiring and releasing connections in a db access class for example) where it makes the intent clearer.
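The inner-aspect style Colyer describes might look like the following hypothetical sketch - AccountDao, Account, and the acquire/release helpers are all invented for illustration:

```aspectj
import java.sql.Connection;
import java.util.List;

class AccountDao {
    public List<Account> findAll() {
        // ... run a query using the connection supplied by the policy ...
        return java.util.Collections.emptyList();
    }

    // Inner aspect: the acquire/release policy that cuts across every
    // query method of this class, stated once instead of per method.
    static aspect ConnectionPolicy {
        Object around(AccountDao dao):
                execution(public * AccountDao.find*(..)) && this(dao) {
            Connection c = dao.acquire();   // hypothetical helper
            try {
                return proceed(dao);
            } finally {
                dao.release(c);             // hypothetical helper
            }
        }
    }
}
```

Because the aspect is nested in the class, the policy stays private to AccountDao while still being stated in exactly one place - which is the "makes the intent clearer" point in the quote.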

At the AOSD Conference, Ivar Jacobson gave a talk in which he explained that use cases have crosscutting structure - that is a key part of what makes them useful - and that you can use AOP to allow modular implementation of use cases.

Dion: One big difference among the toolkits seems to have to do with weaving time and deployment time.

Gregor: One thing that happens in an innovation period is that the engineering tradeoffs get re-examined in light of the specific niches for which the technology is being considered. When we did AspectJ we did not consider dynamic deployment of aspects because we saw some performance overhead there, and we wanted to ensure AspectJ had no hidden performance overhead.

But now it appears that at least for some applications, the power of AOP with dynamic deployment is worth the added performance costs. People like Mira Mezini are adding dynamic deployment to AspectJ, and most of the newer frameworks, like JBoss, work entirely in terms of dynamic deployment.

My sense is that as things progress, people will start to focus on performance. AspectJ already has good performance for static deployment, and the issues in making statically-typed dynamic deployment efficient look tractable, so we should be able to deliver excellent performance in any AOP tool.

Dion: Do you see a future where people use one tool for compile-time weaving, and for run-time weaving? Where will standards for this come from?

Gregor: In AOP terminology, weaving means the coordination of the crosscutting aspects with the rest of the code. So it can be done by a pre-processor, a compiler, a post-compile linker, a loader, the JIT or the VM.

Where we are now is that there are different tools for weaving at different times. Some of the newer frameworks, like AspectWerkz and JBoss AOP, weave at run-time using interceptors. AspectJ does weaving at compile or post-compile time.
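The run-time interceptor approach can be sketched with a plain JDK dynamic proxy. This is not JBoss AOP or AspectWerkz code - just a minimal self-contained illustration of how an interceptor wraps "advice" around method calls at run time, with invented Figure/Point types:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class InterceptorDemo {
    interface Figure {
        void setX(int x);
        int getX();
    }

    static class Point implements Figure {
        private int x;
        public void setX(int x) { this.x = x; }
        public int getX() { return x; }
    }

    // Wraps the target in a proxy whose handler records a log entry
    // before and after every call -- the run-time analogue of advice.
    static <T> T withTracing(Class<T> iface, T target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.add("before " + method.getName());
            Object result = method.invoke(target, args);
            log.add("after " + method.getName());
            return result;
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler));
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Figure p = withTracing(Figure.class, new Point(), log);
        p.setX(42);
        System.out.println(p.getX());   // 42
        System.out.println(log);
    }
}
```

Because the weaving happens when the proxy is created, the same mechanism supports dynamic deployment: the aspect can be attached or detached per object, per request, at run time - which is the flexibility-versus-overhead tradeoff discussed above.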

In designing AspectJ we worked hard to separate the issue of when the weaving happened from the semantics as much as possible. So AspectJ is amenable to weaving along the full range from compile time to VM support. I believe that as we observed with OOP, this kind of semantics for AOP is going to end up being the easiest for developers to work with. So I guess I hope we move to a world where people use different tools because they offer different semantics, rather than because they weave at different times. In other words, weaving is an implementation technique, like method dispatch, that should stay somewhat behind the scenes. We should choose the AOP tool we want for the semantics it provides (assuming there is an adequate implementation).
Dion: What do you think about the need for standardization?

Gregor: This is an interesting question. Java has done pretty well for itself following a rule of not including any new ideas in the actual language standard. (By "new" here I mean ideas that have not been in the innovation phase for some time.) I think this kind of conservatism is a good strategy for something with as many users as the JLS has.

But AOP is not that new anymore. The idea is 8 years old, and AspectJ has had users for 5 years. The core elements have been stable for several years now. At this point I believe it would serve the community well for there to be some kind of standardization that could reduce needless differences between AOP tools, while at the same time leaving room for continued experimentation.

I see at least two ways such standardization could work.

AspectJ is currently the de-facto standard, and that could just become more solid. The community could identify the core of AspectJ, and the AspectJ Open Source developers could agree to keep that stable. The fact that AspectJ is now part of Eclipse means there is growing support for it, so this could be a good way to proceed.

But some people have concerns with AspectJ, particularly around dynamic deployment, syntactic changes to Java, and size. So another possibility would be to develop an AOP kernel that could support a range of systems including AspectJ, JBoss, AspectWerkz and even research systems. This kernel would not define any syntax, just core semantics. This could first be developed as a draft standard of some sort. This would allow a period of a couple of years in which low-level implementation experts could focus on efficiency, security issues etc.; and at the same time, others could focus on how best to package the programmer interface to that technology.

Dion: What are your thoughts on the relationship between metadata (JSR-175) and AOP? Will we define pointcuts based on metadata on methods?
NOTE: From a follow-up on the discussion thread

Gregor: It's clear that using attributes with AOP is very useful. As soon as JSR-175 is done I suspect AspectJ at least will be extended to allow pointcuts to match based on attributes.

As time goes by, we'll learn when it's best to use pointcuts based on attributes and when it's best to use other kinds of pointcuts.

In using pointcuts based on attributes, I think there are some subtle style issues that we're going to have to work out. Some of these have been raised in another TSS thread.

It is clear that certain styles of using attributes are "just macros" or "just declarative programming". Of course that isn't a bad thing per se, but it may fail to give all the modularity benefits we want. It also may make it difficult to use the most powerful feature of AOP -- pointcut composition.

I *think* that when using attributes we should be careful to give them names that describe a property of the 'POJD' (plain old Java declarations) they describe, and then have an advice-like construct say what aspect should apply to POJDs with that attribute. This is in contrast to much existing practice, where the attribute directly names the aspect that should apply.

To be concrete, let me use the well-worn figure example. I believe we should write code like this (assume of course the method is in a class and the advice is in an aspect):

@ChangesDisplayState
void setX(int x) { this.x = x; }

after(): call(@ChangesDisplayState * *(..)) { Display.update(); }

Rather than code like this:

@UpdateDisplay
void setX(int x) { this.x = x; }

after(): call(@UpdateDisplay * *(..)) { Display.update(); }

The reason I believe this is that it better modularizes the crosscutting concern.

The crosscutting concern in this code is "methods that change display state should update the display". In the first code that is modularized in the advice. A second concern, "which methods change display state" is spread out.

In the second code the two concerns are mixed together and both are spread out.

I believe the practical impact of following this style is that you will end up with attribute names that are more reusable -- you will be more likely to be able to use an existing attribute in a new pointcut in useful ways.

P.S. Note that if you write without attributes, something like:

pointcut changesDisplayState(): call(void Point.setX(int)) || ...;

after(): changesDisplayState() { Display.update(); }

then both concerns are well localized in the code! That just goes back to the fact that we have some learning to do in terms of guidelines for when to use attributes and when not to. I'm writing something about that now, but it's not ready to send out.


Dion: How do AOP and OOP fit together?

Gregor: AOP complements OOP. AOP can complement other paradigms, like procedural programming. But since OOP is the dominant paradigm now, most AOP tools are extensions of OO tools.

That AOP complements OOP means more than just the various AOP tools being extensions of Java (or .NET). It means that the practice of AO programming complements the practice of OO programming. Good OO developers learn AOP fairly quickly, because they already have good intuitions about what parts of their systems are crosscutting. This is also reflected in the result of their work - for example, a good AspectJ program tends to be 85% or more a good ordinary Java program.

Dion: Some people say that AOP violates principles of OO modularity. What do you think of that?

Gregor: The principles of modularity are things like textual locality, narrow declarative interfaces, less coupling, greater cohesion etc. Used properly, AOP does not violate those principles, it helps you achieve them. To see why though, you have to be sure to think about examples where it is appropriate to use AOP, not of examples where plain OO already produced good modularity.

In an OO program that would benefit from AOP, like the screen refresh example from above, the modularity of a plain OO solution would be compromised. There would be code for triggering screen refresh scattered through the Point and Line classes. The code for that concern would not be localized, would have no clear interface, would be textually coupled, and would reduce the cohesion of the code in which it is scattered.

If you write the same program with AOP, the modularity is much better. The code to trigger screen refresh is localized, it has a declarative interface to the rest of the code (the pointcuts), it is less coupled, and all the code has better cohesion.

Of course that does not mean it is impossible to write non-modular code with AOP - of course it is possible. It also does not mean we have no work to do in terms of giving developers simple rules of thumb that produce elegant AOP code (by rules of thumb I am thinking of things like the "use private fields and accessors" rule for OO).

But I believe in giving programmers tools that enable writing beautiful programs, not hemming them in with safety mechanisms so that they cannot write bad programs. And the evidence is clear that a number of programmers can use AOP to improve modularity.

Dion: What about the concern that AOP makes it hard to understand your program?

Gregor: AOP makes programs easier to understand, not harder.

Again, you have to think about examples where AOP is needed, not examples where OOP alone produces good modularity. It is also useful to think in terms of the metaphor of not being able to see the forest for the trees.

In a non-AOP program, where you have a scattered implementation of a concern like cache invalidation, it is easy to see whether the particular line of code you are looking at invalidates the cache - there will be a call to some invalidate method there. That call is a tree. But what is hard to see is the forest - the global structure of cache invalidations that tell you what is really going on, and that you use to figure out the hard problems.

In an AspectJ program, the call to cache invalidation does not show up in the code where it happens, but simple IDE support gives you a visual reminder that the call is there. So in an AspectJ program a simple tool lets you see the trees. You can also navigate to the advice and by looking at the pointcut, clearly see the global structure. So the AspectJ program makes the complicated part - the forest - explicit, declarative and clear. The simple part - the trees - is easily recovered by the tools.

Dion: There seems to be a lot of debate about whether AOP support should extend the language syntax, as with AspectJ, or use some kind of XML syntax, as with tools like AspectWerkz. What is your opinion on this?

Gregor: I think this issue is overblown. I do not mean to say it is not important; it is for a number of reasons. But none of them have anything to do with AOP per se. For example, you could design XML syntax for AspectJ semantics fairly easily. The key issues of AOP are essentially orthogonal to this syntax question.

Dion: What in your view are the reasons people think the syntax issue is so important? Is this just like in OOP where people started to say that "I do OO in C… I don't need another language"?

Gregor: I think that is exactly right. For many people there is a natural reluctance to put a new idea in the language before they have played with it as a framework of some sort first. I think that reluctance is probably a good thing; we need to be very conservative about what goes in the language. But again, AOP is hardly a new idea at this stage, and the AspectJ design is based on years of experience and user feedback. I think AOP will end up in the language, not in extra-language frameworks, but it is going to take a bit more time before everyone is comfortable with that.

In addition to the kind of good conservatism I just mentioned, there is also a not terribly sound reason people sometimes give. Because it is common, let me address it here. People sometimes say that a tool like AspectJ is harder to learn because it "requires learning a new language, whereas an AOP framework does not". This I think is just wrong - both kinds of tools require learning a new sub-language. In the case of something like AspectJ, that sub-language is tightly embedded in Java. In the case of something like AspectWerkz, that sub-language is encoded in XML. They are both little sub-languages that developers must learn, and in the framework case they must also learn the framework API.

Your question alludes to OOP from the 1980s, when some people said that OOP should not be added to the language; instead we should program it with frameworks. This came up in a discussion on TSS, in the spring. I think the language approach was right for OOP, and I think it is right for AOP. But I also believe the current innovation around frameworks is valuable, because it helps us explore new design tradeoffs.

An additional advantage of the language approach is that something like AspectJ is actually less powerful than a framework like JBoss. It is less powerful because it inherits things like static typing constraints from Java. This kind of being less powerful can actually make it more broadly usable, because it enables IDE support that makes it easier to understand what the code is doing.

Dion: What is happening with AspectJ?

Gregor: AspectJ is now an Eclipse Technology PMC project, and Adrian Colyer of IBM has taken over as the project lead for the community of developers. This gives AspectJ stable long-term development support, which is great news for AspectJ users.

The 1.1 release just came out, and it features incremental compilation, which facilitates working with large systems; byte-code weaving, which makes it possible to use other Java compilers; and support for aspect libraries. The IDE support has also improved significantly. Eclipse now supports aspect debugging, inline advice annotation, and a high-level, or forest, view of how aspects crosscut the system. Implementation of features like aspect-oriented refactoring is underway.

In addition to IBM and Eclipse, other vendors are supporting AspectJ. IntelliJ is providing early access support for their implementation, and BEA is supporting developers to use AspectJ in WebLogic applications.

Dion: Where should people go to learn more about AOP?

Gregor: There are more sources of training and support all the time. Tutorials are available at The Server Side Symposium, No Fluff Just Stuff, Software Development, OOPSLA, and of course AOSD conferences. Many of these are focused specifically on AOP for enterprise applications. (In fact there is an AspectJ and a JBoss AOP tutorial at NFJS next weekend in Seattle.) There are also several books on AspectJ.

There are now consultants who provide training, mentoring, and design and implementation assistance. Some of those consultants can be reached through an informal consortium I set up called AspectMentor™ (http://aspectmentor.com).

Dion: Should developers be looking to play/work with AOP now?

Gregor: I think any developer who wants to be up on the latest technology should, without a doubt, begin to explore AOP. One thing we learned with OO was that the earlier adopters got more of the benefit. With AOP, like with OOP, the real value is going to come from thinking aspects through the whole product architecture and development cycle. The organizations that start sooner get way ahead of the others. They attract the best people, and are the first to develop aspect libraries and aspect design expertise that have significant business value.

At the same time I would discourage people from jumping headlong into a full-scale switch to AOP. Another thing we learned from OO was that properly managing adoption is critical to success with a new technology.

Dion: Do you believe AOP will enter the mainstream by an incremental approach?

Gregor: Incremental adoption is critical. Developers are much too busy to look at technology that requires a clean start, and you need to have some time to explore the ideas before you jump in full-scale.

AspectJ for example was carefully designed to support incremental adoption of AOP. It does this by being JVM compatible, JLS backwards compatible, and having tool support like IDE and Javadoc extensions.

In addition, in the five years we have been doing AspectJ we have developed a staged adoption process. In this process, the first use of aspects is as development tools: debugging, enhanced testing, or internal contract checking. Another scenario is to use AspectJ to prototype a crosscutting concern, and then, once you have it figured out, manually weave the code in, keeping the AspectJ code as a comment. This development-aspects style allows developers to experiment with AspectJ without needing buy-in from their whole development team, or really even their boss.

After a few months of working this way the developer has two things: a better understanding of AOP with AspectJ, and concrete examples of the specific value it would bring to their code base. That is the best point to try and sell the rest of the development team and the managers, and to start using production aspects that ship with the product.

Dion: Do you think AOP will revolutionize software development like OO did and why?

Gregor: This is a fun question. My answer is yes and no. On the one hand, I think the idea underlying aspects is in some ways deeper than the idea of objects. Objects were a new way of carving up the system into a single hierarchical decomposition. They were similar to procedures in that way.

But aspects are about having multiple, crosscutting decompositions. The idea that we can organize compositional software structures that do not have to be hierarchical is significant.

But the industry is so much more mature now than it was when objects were invented in 1964 (and innovated in 74, 84 and 94). So while I expect aspects will have a significant practical impact, it's hard to say how that will look in comparison to the many things that have happened in the last 40 years.

But clearly the ability to modularize crosscutting concerns will have a big impact, not just in distributed enterprise applications, but also in any other complex software we develop. That will improve the flexibility and quality of software we develop and it will reduce development time and costs. It will also make software development more fun.