Mercurial Cherry-Picking: Branches vs Clones (2013-03-03)<div dir="ltr" style="text-align: left;" trbidi="on">
At LogicBlox we're using Mercurial, and generally we are quite happy with it (before Mercurial we used CVS and later Subversion, so neither was much competition for Mercurial). Mercurial offers quite a few alternative methods for managing diverging development though, which can be a bit confusing. In the early days we mostly used clones, except for the work branches of individual developers. After some debates about the confusion of having two different methods in use, we switched from clones to branches.<br />
<br />
This weekend I was doing a merge of two branches, and accidentally introduced a problem that was caused by earlier cherry-picking of changesets between the two branches. I thought this really should not have been possible, so I decided to sanity-check my understanding of Mercurial branches. It turns out that cherry-picking and Mercurial branches really do not work well together.<br />
<br />
The example I used is as follows:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkXj4xsxKga5ixkS7Uhw5zqE3If5RJ_0zsb5vBDP2UuF-vLvLTk3Zh5Zd2Qh_jHsm0E1U75m_klvEA2OC_SKKqptFvUrkR9CMMhSOK5scfQ5rSRrILvTk14ZRsgV5YYn56aqUp/s1600/Untitled+drawing.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkXj4xsxKga5ixkS7Uhw5zqE3If5RJ_0zsb5vBDP2UuF-vLvLTk3Zh5Zd2Qh_jHsm0E1U75m_klvEA2OC_SKKqptFvUrkR9CMMhSOK5scfQ5rSRrILvTk14ZRsgV5YYn56aqUp/s400/Untitled+drawing.png" width="400" /></a></div>
<br />
This seems fairly typical: at some point you branch for a specific version. During your development it turns out that you need a changeset (revision 1) on your default branch, so you cherry-pick that revision. Later you need to merge all 1.0 development into default, so you do a merge. Now what happens?<br />
<br />
<br />
<div>
To set up the branches, we reuse this shell function in the following examples.</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">function init</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> # initialize a repository</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg init test1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> cd test1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "foo" > file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg add file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "first commit"</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> # make a branch and make two consecutive changes</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg branch 1.0</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "bar" >> file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "bar"</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> echo "fred" >> file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "fred"</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">}</span></div>
</div>
<br />
<br />
<h3 style="text-align: left;">
Option 1: Manually Making Changes</h3>
<div>
With this option the developer adds the line 'bar' manually to both branches. You would expect a merge conflict here, because the system really has no information that these changes are correlated, and perhaps should not assume that they are.</div>
<div>
<br /></div>
<div>
This scenario can be executed as follows:</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">function sample1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> init</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> hg up default</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "bar" >> file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "bar"</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> hg merge 1.0 || echo "conflict expected"</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;">}</span></div>
</div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: inherit;">You will see that Hg nicely reports the merge failure:</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">merging file.txt failed!</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">0 files updated, 0 files merged, 0 files removed, 1 files unresolved</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: inherit;">Of course, applying a diff using diff/patch leads to the same result, because it does not matter how you modify the files.</span></div>
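The underlying reason is that Mercurial's merge is a three-way merge: it only compares the two heads against their common ancestor (revision 0 here), and nothing in that comparison records that the 'bar' change on both sides has a common origin. The following Python sketch (a deliberately crude, whole-file three-way merge, not Mercurial's hunk-based algorithm) illustrates why both sides then look like independent edits:

```python
def merge3(base, local, other):
    """Minimal whole-file three-way merge: if both sides changed the file
    relative to the common ancestor (and disagree), report a conflict."""
    if local == other:
        return local, False   # both sides made the identical change
    if local == base:
        return other, False   # only 'other' changed
    if other == base:
        return local, False   # only 'local' changed
    return None, True         # both sides changed: conflict

# The cherry-picking example: the common ancestor is the "foo" commit.
base  = ["foo"]                  # revision 0
local = ["foo", "bar"]           # default, after cherry-picking "bar"
other = ["foo", "bar", "fred"]   # branch 1.0

merged, conflict = merge3(base, local, other)
assert conflict  # both sides differ from the ancestor, so: conflict
```

The merge machinery only ever sees the three snapshots; however the duplicated "bar" line got there, the information that it was a cherry-pick is gone.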
<h3 style="text-align: left;">
<br /></h3>
<h3 style="text-align: left;">
Option 2: Mercurial Import/Export</h3>
<div>
The second option is to export the changeset using hg export, and import it using hg import. The hope here would be that Mercurial would correctly remember the origin of the changeset. To my surprise, it does not.</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">function sample2</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> init</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg export 1 > bar.diff</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> hg up default</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg import bar.diff</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;"> hg merge 1.0 || echo "conflict??"</span></div>
<div>
<span style="font-family: 'Courier New', Courier, monospace;">}</span></div>
</div>
<div>
<br /></div>
<div>
Result:</div>
<div>
<span style="font-family: Courier New, Courier, monospace;">merging file.txt failed!</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">0 files updated, 0 files merged, 0 files removed, 1 files unresolved</span></div>
<h3 style="text-align: left;">
<br /></h3>
<h3 style="text-align: left;">
Option 3: Transplant/Graft</h3>
<div>
Puzzled by this result, let's try graft (which is roughly the Mercurial 2.0 reimplementation of transplant):</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">function sample3</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> init</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg up default</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg graft 1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg merge 1.0 || echo "conflict???"</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">}</span></div>
</div>
<div>
<br /></div>
<div>
<div>
Result:</div>
<div>
<span style="font-family: Courier New, Courier, monospace;">merging file.txt failed!</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace;">0 files updated, 0 files merged, 0 files removed, 1 files unresolved</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<h3>
Option 4: Clones</h3>
</div>
<div>
People who regularly work with Mercurial will of course immediately see that this is not a problem with clones. In fact, this is simply how distributed development is done in Mercurial. Just for the sake of it, here is an example:</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">function sample4</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">{</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg init test1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> cd test1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "foo" > file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg add file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "first commit"</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> cd ..</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg clone test1 test2</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> cd test2</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "bar" >> file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "bar"</span></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> echo "fred" >> file.txt</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg commit -m "fred"</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg export 1 > ../bar.diff</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> cd ../test1</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg import ../bar.diff</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg pull ../test2</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> hg update</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">}</span></div>
</div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: inherit;">Result:</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">1 files updated, 0 files merged, 0 files removed, 0 files unresolved</span></div>
<div>
<br /></div>
<div>
I'm not really a complete Mercurial guru, so I might be missing something here, but this clearly demonstrates that clones work better for this combination of cherry-picking and merging. If anybody with a deeper understanding of Mercurial branches can present a working example, that would be great!</div>
<div>
<br /></div>
</div>
Why the JVM Spec defines checkcast for interface types (2008-12-04)<p>
I'm working on the specification of pointer analysis for Java using Datalog. Basically, a pointer analysis computes for each variable in a program the set of objects it may point to at run-time.
</p>
<p>
For this purpose I need to express parts of the JVM Spec in Datalog as well. As a simple example, the following Datalog rules define when a class is a subclass of another class.
</p>
<blockquote>
<pre>
/**
* JVM Spec:
* - A class A is a subclass of a class C if A is a direct
* subclass of C
*/
Subclass(?c, ?a) <-
DirectSubclass[?a] = ?c.
/**
* JVM Spec:
* - A class A is a subclass of a class C if there is a direct
* subclass B of C and class A is a subclass of B
*/
Subclass(?c, ?a) <-
Subclass(?b, ?a),
DirectSubclass[?b] = ?c.
</pre>
</blockquote>
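Read operationally, these two rules compute the transitive closure of the direct-subclass relation. As a sanity check, here is a small Python sketch of naive bottom-up evaluation of exactly these rules (the class names are a hypothetical fragment of the real hierarchy):

```python
# DirectSubclass[a] = c means: a is a direct subclass of c.
direct_subclass = {
    "Integer": "Number",
    "Number":  "Object",
}

def subclasses(direct):
    """Naive bottom-up Datalog evaluation: start from the direct-subclass
    facts and apply the recursive rule until no new facts appear."""
    facts = {(c, a) for a, c in direct.items()}          # rule 1
    while True:
        new = {(direct[b], a)                             # rule 2
               for (b, a) in facts if b in direct} - facts
        if not new:
            return facts
        facts |= new

facts = subclasses(direct_subclass)
assert ("Object", "Integer") in facts  # derived by the recursive rule
```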
<p>
As you can see, this is remarkably close to the original specification (quoted in comments). You can clearly see the relationship between the spec and the code, even if you are not familiar with Datalog.
</p>
<p>
Recently, I was working on the specification of the <code>checkcast</code> instruction. This instruction checks at run-time whether an object can be cast to a given type. The <a href="http://java.sun.com/docs/books/jvms/second_edition/html/Instructions2.doc2.html">JVM Spec</a> for checkcast first defines some variables:
</p>
<blockquote>
The following rules are used to determine whether an objectref that
is not null can be cast to the resolved type: if S is the class of
the object referred to by objectref and T is the resolved class,
array, or interface type, checkcast determines whether objectref can
be cast to type T as follows:
</blockquote>
<p>
So, this basically says that we're checking the cast <code>(T)
S</code>.
</p>
<p>
The first rule for this cast is straightforward:
</p>
<blockquote>
If S is an ordinary (nonarray) class, then:
<ul>
<li>If T is a class type, then S must be the same class as T, or a
subclass of T.</li>
<li>If T is an interface type, then S must implement interface
T.</li>
</ul>
</blockquote>
Well, if you're somewhat familiar with Java, or object-oriented
programming, then this part is obvious. Again, the specification in
Datalog is easy:
<blockquote>
<pre>
CheckCast(?s, ?s) <-
ClassType(?s).
CheckCast(?s, ?t) <-
Subclass(?t, ?s).
CheckCast(?s, ?t) <-
ClassType(?s),
Superinterface(?t, ?s).
</pre>
</blockquote>
However, the next alternative in the specification is confusing:
<blockquote>
If S is an interface type, then:
<ul>
<li>If T is a class type, then T must be Object.</li>
<li>If T is an interface type, then T must be the same interface as
S or a superinterface of S.</li>
</ul>
</blockquote>
<p>
The specification is crystal clear, but how can S ever be an interface
type? S is the type of the object that is being cast, and how can an
object ever have a run-time type that is an interface? Of course, the
static type of an expression can be an interface, but we're talking
about the run-time here!
</p>
<p>
I <a href="http://www.google.com/search?hl=en&q=checkcast+%22If+S+is+an+interface+type%22">searched
the web</a>, which only resulted in a few hits. There was one <a href="http://forums-beta.sun.com/thread.jspa?messageID=4335864&tstart=0">question on a Sun forum</a> years ago, where the one answer didn't make a lot of sense.
</p>
<p>
It turns out that this is indeed an 'impossible' case. The reason this
item is in the specification is that checkcast is defined recursively
for arrays:
</p>
<blockquote>
If S is a class representing the array type SC[], that is, an array of
components of type SC, then:
<ul>
<li>...</li>
<li>If T is an array type TC[], that is, an array of components of
type TC, then one of the following must be true:
<ul>
<li>...</li>
<li>TC and SC are reference types, and type SC can be cast to TC
by recursive application of these rules.</li>
</ul>
</li>
</ul>
</blockquote>
<p>
So, if you have an object of type <code>List[]</code> that is cast to
a <code>Collection[]</code>, then the rules for checkcast get
recursively invoked for the types <code>S = List</code> and <code>T =
Collection</code>. Notice that List is an interface, but an object can
have type List[] at run-time. I have not verified this with the JVM
Spec maintainers, but as far as I can see, this is the only reason why
the rule for interface types is there.
</p>
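To make the recursion concrete, here is a small Python model of just the two checkcast rules involved (the array rule and the interface-to-interface rule); the type table is a hypothetical fragment of the real hierarchy:

```python
# Reflexive-transitive superinterface relation (hypothetical fragment).
superinterfaces = {
    "List":       {"List", "Collection"},
    "Collection": {"Collection"},
}
interfaces = set(superinterfaces)

def check_cast(s, t):
    """Model of two JVM checkcast rules; the remaining rules
    (ordinary classes, Object, primitives) are omitted."""
    # Array rule: S = SC[], T = TC[] -> recurse on the component types.
    if s.endswith("[]") and t.endswith("[]"):
        return check_cast(s[:-2], t[:-2])
    # Interface rule: T must be the same interface as S or a superinterface.
    if s in interfaces and t in interfaces:
        return t in superinterfaces[s]
    return False

# List[] is a legitimate *run-time* type; the recursive call makes
# S = List, an interface, reaching the "impossible" interface case.
assert check_cast("List[]", "Collection[]")
```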
<p>
Just to show a little bit more of my specifications, here is the rule
for the array case I just quoted from the JVM Spec:
</p>
<blockquote>
<pre>
CheckCast(?s, ?t) <-
ComponentType[?s] = ?sc,
ComponentType[?t] = ?tc,
ReferenceType(?sc),
ReferenceType(?tc),
CheckCast(?sc, ?tc).
</pre>
</blockquote>
<p>
Isn't it beautiful how this <em>exactly</em> corresponds to the formal
specification?
</p>
<p>
Unfortunately, even formal specifications can have errors, so I also
specified a large test suite that checks the specification against
concrete code. Here are some of the tests for CheckCast.
</p>
<blockquote>
<pre>
test Casting to self
using database tests/hello/Empty.jar
assert
CheckCast("java.lang.Integer", "java.lang.Integer")
test Casting to superclasses
using database tests/hello/Empty.jar
assert
CheckCast("java.lang.Integer", "java.lang.Number")
CheckCast("java.lang.Integer", "java.lang.Object")
test Cast ArrayList to various superinterfaces
using database tests/hello/Arrays.jar
assert
CheckCast("java.util.ArrayList", "java.util.List")
CheckCast("java.util.ArrayList", "java.util.Collection")
CheckCast("java.util.ArrayList", "java.io.Serializable")
test Cast class[] to implemented interface[]
using database tests/hello/Arrays.jar
assert
CheckCast("java.util.ArrayList[]", "java.util.List[]")
CheckCast("java.lang.Integer[]", "java.io.Serializable[]")
test Cast interface[] to superinterface[]
using database tests/hello/Arrays.jar
assert
CheckCast("java.util.List[]", "java.util.Collection[]")
</pre>
</blockquote>
<p>
The tests are specified in a little domain-specific language for
unit-testing Datalog that I implemented, initially for <a href="http://www.iris-reasoner.org">IRIS</a> and later for <a href="http://www.logicblox.com">LogicBlox</a>. This tool is similar to
<a href="http://releases.strategoxt.org/strategoxt-manual/unstable/manual/chunk-chapter/tutorial-sdf.html#sdf-unit-testing">parse-unit</a>,
a tool I wrote earlier for testing parsers in <a href="http://www.strategoxt.org">Stratego/XT</a>. The concise syntax
of a test encourages you to write a lot of tests. Domain-specific
languages rock for this purpose!
</p>New Conference on Software Language Engineering (2008-06-30)<p>
This year is special. There is a new and exciting conference: the <a href="http://planet-sl.org/sle2008/">International Conference on Software Language Engineering (SLE)</a>. The deadline for submission of papers is July 14th, which is coming up soon! Before I start raving about the topics covered by this conference, here is the disclaimer: I'm on the program committee of this conference, and as such I believe it's my duty to advertise the conference.
</p>
<p>
Anyway, if done right, this conference has the potential to become a major and prestigious conference. The conference fills a clear gap: the topics of software language engineering do not exactly fit in major programming language conferences like OOPSLA, PLDI, POPL, and ECOOP. Nor do they fit exactly in the area of compiler construction (CC). CC typically does not accept the more engineering- or methodology-oriented papers. For OOPSLA and ECOOP the work more or less has to be in the context of object-oriented programming, for POPL it immediately has to be a principle (whatever that is), and for PLDI there are usually just a few slots available for papers that don't do something with memory management, garbage collection, program analysis, or concurrency. Personally, I've been pretty successful at getting papers in the area of software language engineering accepted at OOPSLA, but a full conference devoted to this topic is much better!
</p>
<p>
Another reason why I think that this conference has a lot of potential is that if I look at the list of topics of interest in the <a href="http://planet-sl.org/sle2008/index.php?option=com_content&task=view&id=4&Itemid=4">call for papers</a>, then I can only think of one summary: everything that's fun! I'm convinced I'm not the only one who thinks these topics are fun. When talking to colleagues, I notice again and again most of us just love languages. The engineering of those languages is an issue for almost all computer scientists and many programmers in industry, and this conference will be the most obvious target for papers about this!
</p>
<p>
Also, the formalisms and techniques used for the specification and implementation of (domain-specific) languages are still very much an open research topic. Standardization of languages is still far from perfect, as discussed in many posts on this blog. New language implementation techniques are being proposed all the time, and extensible compilers for developing language extensions are more popular than ever. Not to mention the increasing interest in using domain-specific languages to help solve the software development problems we're facing.
</p>
<p>
Earlier in this post I wrote that this conference has major potential <em>if done right</em>. There are a few risks. First, the conference has been started by two relatively small communities: ATEM and LDTA. I think the conference should attract a much larger community than the union of those two communities. I hope lots of people outside of the ATEM and LDTA communities will consider submitting a paper. Second, this year the conference is co-located with MODELS. Many programming language people are slightly allergic to model-driven engineering. I hope they will realize that this conference is <em>not</em> specifically a model-driven conference. Finally, the whole setup of the conference should be international and varied. I'm sorry to say that at this point I'm not entirely happy with the choice of keynote speakers. This is nothing personal: I respect both keynote speakers, but the particular combination of the two is a bit unfortunate. First, they are both Dutch. Second, neither of them is extremely well known in the communities of OOPSLA, PLDI, or ECOOP. I hope this will not affect the potential of this interesting conference.
</p>
<p>
Now go work on your submission!
</p>Ph.D. Thesis: Exercises in Free Syntax (2008-01-20)<p>
It has been awfully quiet here; I'm sorry about that. There are a few reasons for that. The first one is that I assembled my PhD thesis from my publications. This took quite some time and energy, but the result is great! My dissertation <a href="http://martin.bravenboer.name/thesis.html">Exercises in Free Syntax</a> is available online. If you are interested in having a dead-tree version, just let me know!
</p>
<p>
I will defend my thesis tomorrow, January 21 (see the Dutch <a href="http://applicaties.csc.uu.nl/uupona/bekijkpromotie.cfm?npromotieid=1972">announcement</a>). It's weird to realize that tomorrow is the culmination of four years of intense work!
</p>
<p>
For the library I created an English abstract. To give you an idea of what the thesis is about, let me quote it here:
</p>
<blockquote>
<p>
In modern software development the use of multiple software languages
to constitute a single application is ubiquitous. Despite the
omnipresent use of combinations of languages, the principles and
techniques for using languages together are ad-hoc, unfriendly to
programmers, and result in a poor level of integration. We work
towards a principled and generic solution to language extension by
studying the applicability of modular syntax definition, scannerless
parsing, generalized parsing algorithms, and program transformations.
</p>
<p>
We describe MetaBorg, a method for providing concrete syntax for
domain abstractions to application programmers. Since object-oriented
languages are designed for extensibility and reuse, the language
constructs are often sufficient for expressing domain abstractions at
the semantic level. However, they do not provide the right
abstractions at the syntactic level. The MetaBorg method consists of
embedding domain-specific languages in a general purpose host language
and assimilating the embedded domain code into the surrounding host
code. Instead of extending the implementation of the host language,
the assimilation phase implements domain abstractions in terms of
existing APIs leaving the host language undisturbed.
</p>
<p>
We present a solution to injection vulnerabilities. Software written
in one language often needs to construct sentences in another
language, such as SQL queries, XML output, or shell command
invocations. This is almost always done using unhygienic string
manipulation. A client can then supply specially crafted input that
causes the constructed sentence to be interpreted in an unintended
way, leading to an injection attack. We describe a more natural style
of programming that yields code that is impervious to injections by
construction. Our approach embeds the grammars of the guest languages
into that of the host language and automatically generates code that
maps the embedded language to constructs in the host language that
reconstruct the embedded sentences, adding escaping functions where
appropriate.
</p>
<p>
We study AspectJ as a typical example of a language conglomerate,
i.e. a language composed of a number of separate languages with
different syntactic styles. We show that the combination of the
lexical syntax leads to considerable complexity in the lexical states
to be processed. We show how scannerless parsing elegantly addresses
this. We present the design of a modular, extensible, and formal
definition of the lexical and context-free aspects of the AspectJ
syntax. We introduce grammar mixins, which allows the declarative
definition of keyword policies and combination of extensions.
</p>
<p>
We introduce separate compilation of grammars to enable deployment of
languages as plugins to a compiler. Current extensible compilers focus
on source-level extensibility, which requires users to compile the
compiler with a specific configuration of extensions. A compound
parser needs to be generated for every combination. We introduce an
algorithm for parse table composition to support separate compilation
of grammars to parse table components. Parse table components can be
composed (linked) efficiently at runtime, i.e. just before
parsing. For realistic language combination scenarios involving
grammars for real languages, our parse table composition algorithm is
an order of magnitude faster than computation of the parse table for
the combined grammars, making online language composition feasible.
</p>
</blockquote>
<p>
Also, they asked me for a Dutch, non-technical summary for news websites. For my Dutch readers:
</p>
<blockquote>
<p>
We presenteren een verzameling van methoden en technieken om
programmeertalen te combineren. Onze methoden maken het bijvoorbeeld
mogelijk om in een programmeertaal die ontworpen is voor algemene
doeleinden een subtaal te gebruiken die beter aansluit bij het domain
van een bepaald onderdeel van een applicatie. Hierdoor kan een
programmeur op een duidelijkere en compactere wijze een aspect van de
software implementeren.
</p>
<p>
Op basis van dezelfde technieken presenteren we een methode die
programmeurs beschermt tegen fouten die de oorzaak zijn van het meest
voorkomende beveiligingsprobleem, een zogenaamde injectie aanval. Door
op een iets andere wijze te programmeren, heeft de programmeur de
garantie dat de software niet gevoelig is voor dergelijke
aanvallen. In tegenstelling tot eerder voorgestelde oplossingen geeft
onze methode absolute garanties, is eenvoudiger voor de programmeur,
en kan gebruikt worden voor alle gevallen waarin injectie aanvallen
kunnen voorkomen (bijvoorbeeld niet specifiek voor de taal SQL).
</p>
<p>
Tot slot maken onze technieken het mogelijk om de syntaxis van sommige
programmeertalen duidelijker en formeler te definieren. Sommige
moderne programmeertalen zijn eigenlijk een samensmelting van
verschillende subtalen (zogenaamde taalagglomeraten). Van dergelijke
talen was het tot nu toe onduidelijk hoe de syntaxis precies
geformuleerd kon worden, wat voor standaardisering en compatibiliteit
noodzakelijk is.
</p>
</blockquote>LDTA'07 slides on Grammar Engineering Tools (2007-04-03)
The <a href="http://martin.bravenboer.name/docs/ldta07-slides.pdf">slides</a> of our presentation of the LDTA'07 paper <a href="http://martin.bravenboer.name/docs/ldta07.pdf">Grammar Engineering Support for Precedence Rule Recovery and Compatibility Checking</a> are now available online. The slides are a spectacular demonstration of LaTeX masochism, so please take a look ;). There are a few bonus slides after the conclusion that I wasn't able to show during the 30-minute version of the talk.
Migration of the YACC grammar for PHP to SDF (2007-04-03)<p>
Last summer, <a href="http://ericbouwers.blogspot.com">Eric Bouwers</a> started working on infrastructure for PHP program transformation and analysis, sponsored by the Google Summer of Code. He did an excellent job, thanks to his expertise in PHP and his thorough knowledge of <a href="http://www.strategoxt.org">Stratego/XT</a>. To enjoy all the language engineering support in Stratego/XT, Eric developed a PHP grammar in SDF, the grammar formalism that is usually applied in Stratego/XT projects. Unfortunately it proved to be very difficult to get the grammar of PHP right.
</p>
<h2>PHP precedence problems</h2>
<p>
PHP features many operators, and the precedence of the operators is somewhat unusual and challenging for a grammar formalism. For example, PHP allows the weak binding assignment operator as an argument of the binary, strong binding <code>&&</code> operator:
</p>
<pre>
if ($is_upload && $file = fopen($fname, 'w')) {
...
}
</pre>
<p>
The same holds for the unary, strong binding <code>!</code> operator:
</p>
<pre>
if(!$foo = getenv('BAR')){
...
}
</pre>
<p>
A similar precedence rule for the <code>include</code> operator allows an <code>include</code> to occur as the argument of the strong binding <code>@</code> operator:
</p>
<pre>
@include_once 'Var/Renderer/' . $mode . '.php'
</pre>
<h2>Precedence rule recovery</h2>
<p>
The most serious problem was to find out what the exact precedence rules of the PHP operators are. The syntax of PHP is defined by a YACC grammar, which has a notion of precedence declarations that is heavily used by the PHP grammar. Unfortunately, for more complex grammars it is far from clear what the exact effect of the precedence declarations is. The precedence declarations are only used for conflict resolution in the parse table generator, so if there is no conflict, then the precedence declarations do not actually have any effect on a particular combination of operators. That's why we developed support for recovering precedence rules from YACC grammars, which I already wrote about in a <a href="http://mbravenboer.blogspot.com/2007/01/grammar-engineering-im-loving-it.html">previous blog post</a>. Based on these tools, we now have a very precise specification of the precedence rules of PHP.
</p>
<p>
The next step in the process of getting a perfect PHP grammar was to actually use this specification to develop very precise precedence declarations for the SDF grammar of PHP. However, the precedence rule specification involves about 1650 rules, so migrating these precedence rules to SDF precedence declarations by hand is not really an option. Fortunately, all the ingredients are actually there to <em>generate</em> SDF priority declarations from the precedence rules that we recover from the YACC grammar.
</p>
<h2>Argument-specific priorities</h2>
<p>
Thanks to two new features of SDF, these precedence rules can be translated directly to SDF. The first feature is argument-specific priorities. In the past, SDF only allowed priority declarations between productions. For example, the SDF priority
</p>
<pre>
E "*" E -> E > E "+" E -> E
</pre>
<p>
defines that the production for the <code>+</code> operator cannot be applied to produce any of the <code>E</code> arguments of the production for the <code>*</code> operator, hence the production for the addition operator cannot be applied on the left-hand side or right-hand side of the multiplication operator. This priority implies that the multiplication operator binds stronger than the addition operator. This single SDF priority corresponds to the following <em>two</em> precedence rules in the grammar formalism independent notation we are using in the <a href="http://www.stratego-language.org/Stratego/GrammarEngineeringTools">Stratego/XT Grammar Engineering Tools</a>:
</p>
<pre>
<E -> <E -> E + E> * E>
<E -> E * <E -> E + E>>
</pre>
<p>
For many languages the precedence rules differ between arguments of the same production. That's why we use the more specific representation of precedence rules in our grammar engineering tools. Fortunately, SDF now supports argument-specific priorities as well. These argument-specific priorities are just plain numbers that indicate to which arguments of a production the priority applies. For example, the following SDF priority forbids the assignment operator only at the left-most and the right-most <code>E</code> of the conditional operator:
</p>
<pre>
E "?" E ":" E -> E <0,4> > E "=" E -> E
</pre>
<p>
This corresponds to the following precedence rules:
</p>
<pre>
<E -> <E -> E = E> ? E : E>
<E -> E ? E : <E -> E = E>>
</pre>
<h2>Non-transitive priorities</h2>
<p>
The second new SDF feature that is required for expressing the PHP precedence rules is non-transitive priorities. Before the introduction of this feature, all SDF priorities were transitively closed. For example, if there are two separate priorities
</p>
<pre>
"!" E -> E > E "+" E -> E
E "+" E -> E > V "=" E -> E
</pre>
<p>
then by the transitive closure of priorities this would imply the priority
</p>
<pre>
"!" E -> E > V "=" E -> E
</pre>
<p>
This transitive closure feature is useful in most cases, but for some languages (such as PHP) the precedence rules are in fact not transitively closed, which makes the definition of these rules in SDF slightly problematic. For this reason, SDF now also features non-transitive priorities, using a dot before the <code>></code> of the priority:
</p>
<pre>
"!" E -> E .> E "+" E -> E
</pre>
<p>
Non-transitive priorities will not be included in the transitive closure, which gives you very precise control over the precedence rules.
</p>
<h2>Precedence rule migration</h2>
<p>
Thanks to the argument-specific, non-transitive priorities of SDF, the precedence rules that we recover from the YACC grammar for PHP can now be mapped directly to SDF priority declarations. The two precedence rules mentioned earlier:
</p>
<pre>
<E -> <E -> E + E> * E>
<E -> E * <E -> E + E>>
</pre>
<p>
now translate directly to SDF priorities:
</p>
<pre>
E * E -> E <0> .> E + E -> E
E * E -> E <2> .> E + E -> E
</pre>
<p>
The migration of the recovered YACC precedence rules results in about 1650 of these SDF priorities, but thanks to the fully automatic migration this huge number of priorities is not really a problem. The resulting PHP syntax definition immediately <a href="https://bugs.cs.uu.nl/browse/PSAT-55">solved</a> <a href="https://bugs.cs.uu.nl/browse/PSAT-58">all</a> the <a href="https://bugs.cs.uu.nl/browse/PSAT-49">known</a> <a href="https://bugs.cs.uu.nl/browse/PSAT-53">issues</a> with the PHP syntax definition, which shows that the migration was reliable and successful.
</p>
<h2>Future</h2>
<p>
There is a lot of interesting work left to be done. First, it would be interesting to develop a more formal grammar for PHP, similar to the grammars of the C, Java, and C# specifications. These specifications all encode the precedence rules of the operators in the production rules, by introducing non-terminals for all the precedence levels. It should not be too difficult to determine such an encoding automatically from the precedence rules we recover. This would result in a formal specification of the PHP syntax, which would benefit many other parser generators. One of the remarkable things we found out is that the unary <code>-</code> operator has the same precedence as the binary <code>-</code> (usually it binds more strongly), which results in <code>-1 * 3</code> being parsed as <code>-(1 * 3)</code>. We have not been able to find an example where this strange precedence rule results in unexpected behaviour, but for the development of a solid parser it is essential that such precedence rules are defined precisely.
</p>
<p>
Second, it would be good to try to minimize the number of generated SDF priorities by determining a priority declaration that a human can actually comprehend. This would involve finding out where the transitive closure feature of SDF priorities can be used to remove redundant priority declarations.
</p>
<p>
Third, it would be great to integrate the precedence rule migration in a tool that completely migrates a YACC/FLEX grammar to SDF. For this, we need tools to parse and understand a FLEX specification, and we need to extend the existing support for precedence rule migration to other YACC productions.
</p>
<p>
Clearly, there is lots of interesting (and useful!) grammar engineering work to do in this direction!
</p>

<h2>x86-64 support for Stratego/XT! (2007-03-01)</h2>
<p>
Today is the day that <a href="http://www.strategoxt.org">Stratego/XT</a> supports 64-bit processors! Stratego/XT supports x86-64 from release <a href="http://buildfarm.st.ewi.tudelft.nl/releases/strategoxt/strategoxt-0.17M3pre16744/">0.17M3pre16744</a> (<a href="http://buildfarm.st.ewi.tudelft.nl/releases/strategoxt/strategoxt-unstable-latest/">or later</a>), combined with the sdf2-bundle from release <a href="http://buildfarm.st.ewi.tudelft.nl/releases/meta-environment/sdf2-bundle-2.4pre212034-sqzzbkp3/">2.4pre212034</a> (<a href="http://buildfarm.st.ewi.tudelft.nl/releases/meta-environment/sdf2-bundle-unstable-latest/">or later</a>). The releases are available from our new <a href="http://buildfarm.st.ewi.tudelft.nl/releases">Nix buildfarm</a> at the TU Delft.
</p>
<h2>Some history</h2>
<p>
About 6 years ago, various people <a href="http://mail.cs.uu.nl/pipermail/stratego-dev/2003q2/000262.html">started</a> <a href="http://mail.cs.uu.nl/pipermail/stratego-dev/2003q2/000266.html">to</a> <a href="http://mail.cs.uu.nl/pipermail/stratego-dev/2003q3/000516.html">complain</a> <a href="http://mail.cs.uu.nl/pipermail/stratego-dev/2003q3/000517.html">about</a> <a href="http://mail.cs.uu.nl/pipermail/stratego/2003q4/000080.html">the</a> <a href="http://mail.cs.uu.nl/pipermail/stratego/2005q4/000440.html">lack</a> <a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=190">of</a> <a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=354">64-bit</a> <a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=606">processor</a> support. At the time, most complaints came from our very own Unix geek Armijn Hemel, mostly because of his passion for Sun and these strange UltraSparc machines. However, similar to the limited distribution of Unix geeks, 64-bit systems were rather uncommon at the time. The requests we got were about more obscure processors, like Sun's UltraSparc and Intel's IA-64 (Itanium).
</p>
<p>
The 64-bit issues were never solved because <em>(1)</em> we never had a decent 64-bit machine at our disposal, <em>(2)</em> users with 64-bit systems were uncommon, and <em>(3)</em> most of the issues were actually not Stratego/XT issues, but problems in the <a href="http://www.cwi.nl/htbin/sen1/twiki/bin/view/Meta-Environment/ATerms">ATerm library</a>, which is not maintained by the Stratego/XT developers.
</p>
<h2>Some first steps</h2>
<p>
However, it is no longer possible to ignore 64-bit systems: Intel and AMD both sell 64-bit processors for consumers these days. Several users of Stratego/XT already have x86-64 machines, and the only reason why they don't complain en masse is that there is always the option to compile in 32-bit mode (using <code>gcc -m32</code>).
</p>
<p>
At the TU Delft (the new Stratego/XT headquarters), we now have an amazing buildfarm with some <a href="http://blog.eelcovisser.net/index.php?/archives/36-Bootfarm.html">real, dedicated hardware</a> bought specifically for the purpose of building software. At the moment, all our build machines (except for the Mac Minis) have x86-64 processors, so the lack of 64-bit machines is no longer an excuse.
</p>
<p>
Also, the ATerm library now enjoys a few more contributors. Last summer, Eelco Dolstra from Utrecht University created the first complete 64-bit patch for the ATerm library (Meta-Environment issue <a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=606">606</a>), simply because his <a href="http://nix.cs.uu.nl">Nix package management system</a> uses the ATerm library and portability of Nix is important. Also, Erik Scheffers from the Eindhoven University of Technology has done an excellent job on the development of ATerm branches that support GCC 4.x and 64-bit machines.
</p>
<h2>The final step .. uh, steps</h2>
<p>
As a result, it was now feasible to fully support x86-64 systems. The only thing left for me to do was to use all the right patches and branches and enable an x86-64 build in our buildfarm. At least, that's what I thought ... Well, if you know computer scientists, then you'll also know that they are always far too optimistic.
</p>
<p>
In the end, it took me about four days to get everything working. This is rather stressful work, I must say. Debugging code that is affected by a mixture of 32-bit assumptions and aliasing bugs introduced by GCC optimizations is definitely <em>not</em> much fun. You can stare at C code for as long as you like, but if the actual code being executed is completely different, then this won't help much. All in all, this little project resulted in quite a few new issues:
</p>
<ul>
<li>
<a href="https://bugs.cs.uu.nl/browse/STR-701">STR-701</a> is a bug that was raised by casting a pointer to an integer in the <code>address</code> strategy of <code>libstratego-lib</code>, which returns an integer representation of the address of an ATerm. The Stratego Library has had this strategy for a long time, and indeed the most natural representation of an address is an integer datatype. Unfortunately, ATerm integers are fixed size, 32-bit integers, hence it cannot be used to represent a pointer of 64 bits. The new representation is a string, which is acceptable for most of the applications of <code>address</code>.
</li>
<li>
<p>
<a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=720">Meta-Environment issue 720</a> is related to GCC optimizations based on strict alias analysis. In this case, the optimization seems to be applied only in the x86-64 backend of GCC, while the underlying problem is in fact architecture independent.
</p>
<p>
The code that raises this bug applies efficient memory allocation by allocating blocks of objects rather than individual ones. The available objects are encoded efficiently in a linked list, with only a <code>next</code> field. This <code>next</code> field is used for the actual data of the object, as well as the link to next available object. The objects are character classes, having the name <code>CC_Class</code>, which is a typedef for an array of longs. Roughly, the invalid code for adding a node to the linked list looks like this:
</p>
<pre>
struct CC_Node {
  struct CC_Node *next;
};

static struct CC_Node *free_nodes = NULL;

void add_node(CC_Class* c) {
  struct CC_Node *node = (struct CC_Node *) c;
  node->next = free_nodes;
  free_nodes = node;
}
</pre>
<p>
The problem with this efficient linked list is that the same memory location is accessed through pointers of different types, in this case a pointer to a <code>CC_Node struct</code> and a pointer to a <code>CC_Class</code>. Hence, the code creates aliases of different types, which is invalid in C (see for example this nice <a href="http://www.cellperformance.com/mike_acton/2006/06/understanding_strict_aliasing.html">introduction to strict aliasing</a>). In this case, C compilers are allowed to assume that the two variables do not alias, which enables a whole bunch of optimizations that are invalid if they do in fact alias.
</p>
<p>
The solution for this is to use a C union, which explicitly informs the compiler that a certain memory location is accessed through two different types. Using a union, the above code translates to:
</p>
<pre>
union CC_Node {
  CC_Class *cc;
  CC_Class **next;
};

static union CC_Node free_node = {NULL};

void add_node(CC_Class* c) {
  union CC_Node node;  /* local view of the object being freed */
  node.cc = c;
  *(node.next) = free_node.cc;
  free_node.cc = node.cc;
}
</pre>
<blockquote>
<p style="font-style: italic; font-size: small;">
Sidenote: I'm not really a C union expert, and I'm not 100% sure whether in this case a union is necessary for a <code>CC_Class*</code> and <code>CC_Class**</code> or <code>CC_Class</code> and <code>CC_Class*</code>. The union I've chosen solves the bug, but I should figure out what the exact solution should be. Feedback is welcome.
</p>
</blockquote>
</li>
<li>
<a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=718">Meta-Environment issue 718</a> is related to the previous bug. The problem here is that the same memory locations are accessed through a generic datatype (ATerm) as well as pointers to more specific structs, which again leads to strict aliasing problems. This time, the issue has been solved in a more ad-hoc way by declaring a variable as volatile. This solves the issue for now, but a more fundamental solution (probably a union) is necessary here as well.
</li>
<li>
<p>
<a href="https://bugs.cs.uu.nl/browse/STR-705">STR-705</a> adds some checks for the size of various types to the Stratego/XT build system, called Auto/XT. These checks are necessary for the header files of the ATerm library, which determine the characteristics of the platform based on the size of longs, integers, and void pointers, which are defined as macros (a feature that is under discussion in <a href="http://sjofar.sen.cwi.nl:8080/show_bug.cgi?id=606">Meta-Environment issue 606</a>: it does not play very well with cross compilation and compilation in 32-bit mode on a 64-bit platform). The ATerm library we are using at the moment is the branch <code>64-bit-fixes</code>, which has been developed by Eelco Dolstra and Erik Scheffers.
</p>
<p>
The new macro <code>XT_C_TYPE_CHARACTERISTICS</code> checks the sizes and defines the macros that are required by these headers. The macro <code>XT_SETUP</code> invokes the <code>XT_C_TYPE_CHARACTERISTICS</code> macro, so all packages based on Stratego/XT will automatically support the 64-bit branch of the ATerm library.
</p>
</li>
<li>
<a href="https://bugs.cs.uu.nl/browse/STR-703">STR-703</a> is related to the previous issues. In packages based on the GNU Autotools and Auto/XT, the C code is compiled by the Automake-based build system, not by the Stratego compiler itself (which only produces the C code). In this case, the <code>XT_C_TYPE_CHARACTERISTICS</code> takes care of the required defines. However, the Stratego compiler can also be used as a standalone compiler, where <code>strc</code> invokes the C compiler itself. In this case, <code>strc</code> needs to pass the definitions of macros to the C compiler.
</li>
<li>
<p>
<a href="https://bugs.cs.uu.nl/browse/STR-704">STR-704</a> drops the use of autoheader in stratego-libraries. Autoheader replaces the command-line definition of macros with a generated <code>config.h</code>. This generated file used to be installed as <code>stratego-config.h</code>, but this header file is no longer necessary: there is no configuration option in this file that is still necessary as part of the Stratego/XT installation. The mechanism of <code>config.h</code> installation is rather fragile (some macro definitions have to be removed), so if it is not necessary anymore, then why not drop it ...
</p>
<p>
The relation to x86-64 support is that several C files in the stratego-libraries package did not correctly include the generated <code>config.h</code> before <code>aterm2.h</code>. This breaks on x86-64 systems because <code>aterm2.h</code> requires the aforementioned macro definitions.
</p>
</li>
</ul>
<h2>Short version</h2>
<p>
The net result of this operation is that we now support x86-64 systems. And this time we will keep supporting 64-bit processors, whatever it takes.
</p>
<p>
It would be fun to check now if UltraSparc and IA-64 machines work out of the box, but I don't have access to any of these. If you have one, I would love to know if it works.
</p>

<h2>Base access in the C# specification (2007-02-14)</h2>
<p>
In a <a href="http://mbravenboer.blogspot.com/2007/02/informal-specifications-are-not-so.html">previous post</a>, I discussed a bug in the Java Language Specification on super field access of protected fields. If you haven't read this yet, I would suggest to give it a read before you continue with this post. Thanks to a discussion with <a href="http://blogs.sun.com/abuckley/">Alex Buckley</a> (the new maintainer of the Java Language specification), there is now a proposal to fix this bug in an elegant way. I'll report on the solution and the nice discussion on the relation to super field accesses in bytecode later.
</p>
<p>
However, first I would like to illustrate the risk of reuse. While writing on issues in the Java Language Specification, I figured that the C# specification probably has the same issue. After all, C# features the same <a href="http://mbravenboer.blogspot.com/2006/04/on-details-of-protected-access-in-java.html">details of protected access</a>. Consider the following two C# classes:
</p>
<pre>
class A {
  protected int secret;
}

class B : A {
  public void f(A a) {
    a.secret = 5;
  }
}
</pre>
<p>
Due to the details of protected access, this example won't compile. The Mono C# compiler clearly explains the problem:
</p>
<pre>
A.cs(17,5): error CS1540: Cannot access protected
member `A.secret' via a qualifier of type `A'. The
qualifier must be of type `B' or derived from it
</pre>
<p>
Of course, C# also supports access to fields of base classes (aka super classes). Indeed, checking the C# specification reveals that the definition of base access is exactly the same as super field access in Java. In Section 14.5.8 of the C# Language Specification (<a href="http://www.ecma-international.org/publications/standards/Ecma-334.htm">ECMA-334</a>), the semantics of a base access expression is defined in the following way:
</p>
<blockquote>
<em>
"At compile-time, base-access expressions of the form <code>base.I</code> and <code>base[E]</code> are evaluated exactly as if they were written <code>((B)this).I</code> and <code>((B)this)[E]</code>, where <code>B</code> is the base class of the class or struct in which the construct occurs."
</em>
</blockquote>
<p>
Compare this definition to the Java Language Specification:
</p>
<blockquote>
<em>
"Suppose that a field access expression <code>super.name</code> appears within class <code>C</code>, and the immediate super class of <code>C</code> is class <code>S</code>. Then <code>super.name</code> is treated exactly as if it had been the expression <code>((S)this).name</code>."
</em>
</blockquote>
<p>
The good thing about this reuse is that I can reuse the examples of my previous post as well. Consider the following two C# classes that compile without any problem:
</p>
<pre>
class A {
  protected int secret;
}

class B : A {
  public void f() {
    base.secret = 5;
  }
}
</pre>
<p>
Next, consider the derivative of this example, where class B has been modified to refer to the field secret using <code>(A) this</code> which is exactly the same as a reference through <code>base</code>, according to the specification.
</p>
<pre>
class B : A {
  public void f() {
    ((A) this).secret = 5;
  }
}
</pre>
<p>
Similar to Java, this class won't compile, due to the details of protected access in C#. Again, the Mono C# compiler explains the issue:
</p>
<pre>
A.cs(13,7): error CS1540: Cannot access protected
member `A.secret' via a qualifier of type `A'. The
qualifier must be of type `B' or derived from it
</pre>
<p>
This example shows that for C# the two expressions <code>base.secret</code> and <code>((A) this).secret</code> are not evaluated in the same way, so the previously reported problem in the Java Language Specification also applies to the C# specification.
</p>
<p>
Now I have to figure out how to report issues in the C# specification
...
</p>

<h2>Informal specifications are not so super (2007-02-10)</h2>
<p>
I'm trying to get back to the good habit of blogging about our work. I'm not very fond of dumping random links or remarks, so the most challenging part of blogging is to find a good topic to write a decent story about. This time, I fall back to a topic I was actually working on about two years ago, but which is still most relevant. At that time, I was actively developing a type checker for Java, as part of the <a href="http://www.stratego-language.org/Stratego/TheDryad">Dryad</a> project. This story is about a bug in the <a href="http://java.sun.com/docs/books/jls/">Java Language Specification</a> that, for whatever bizarre reason, has never been reported (afaik).
</p>
<h2>Super field access</h2>
<p>
Java supports access to fields of super classes using the <code>super</code> keyword, even if this field is hidden by a declaration of another field with the same name. For example, the following sample will print <code>super.x = 1 and x = 2</code>.
</p>
<pre>
class S {
  int x = 1;
}

class C extends S {
  int x = 2;

  void print() {
    System.out.println("super.x = " + super.x + " and x = " + x);
  }

  public static void main(String[] ps) {
    new C().print();
  }
}
</pre>
<p>
To allow access from inner classes to hidden fields of enclosing instances, Java also supports qualified super field accesses. In this case, the <code>super</code> keyword is prefixed with the name of a lexically enclosing class. This feature is related to the qualified <code>this</code> expression, which allows you to refer to an enclosing instance.
</p>
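<p>
As a small illustration of the qualified form (my own sketch, not an example from the specification): an inner class of <code>C</code> can use <code>C.super.x</code> to reach the field of <code>C</code>'s superclass that is hidden by <code>C</code>'s own declaration.
</p>

```java
// Sketch of qualified super field access; class names are my own.
class S {
    int x = 1;
}

class C extends S {
    int x = 2;  // hides S.x

    class Inner {
        int readHidden() {
            // C.super.x reads the x of C's superclass S (value 1),
            // even though C declares a hiding x of its own.
            return C.super.x;
        }
    }

    public static void main(String[] args) {
        C c = new C();
        System.out.println(c.new Inner().readHidden());  // prints 1
    }
}
```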
<h2>Current specification</h2>
<p>
We all have a reasonable, though informal, idea of what the semantics of this language feature is. Of course, for a real specification the semantics has to be defined more precisely. For example, two things that need to be defined are what the type of such an expression is and whether the field is accessible at all. The specification concisely defines the semantics of this language feature by <em>forwarding</em> the semantic rules to existing, more basic language features. For <code>super.name</code>, the JLS specifies:
</p>
<blockquote>
<em>
"Suppose that a field access expression <code>super.name</code> appears within class <code>C</code>, and the immediate super class of <code>C</code> is class <code>S</code>. Then <code>super.name</code> is treated exactly as if it had been the expression <code>((S)this).name</code>."
</em>
</blockquote>
<p>
So, in the example I gave, <code>super.x</code> would be <em>exactly</em> equivalent to <code>((S)this).x</code>. Obviously, the emphasis on <em>exactly</em> is on purpose. Why would they use this word? Does this suggest that there is also a notion of being treated <em>almost exactly</em> in the same way? ;)
</p>
<p>
For qualified field access, the specification is almost the same, but this time using a qualified <code>this</code> instead of <code>this</code>.
</p>
<blockquote>
<em>
"Suppose that a field access expression <code>T.super.name</code> appears within class <code>C</code>, and the immediate super class of the class denoted by <code>T</code> is a class whose fully qualified name is <code>S</code>. Then <code>T.super.name</code> is treated exactly as if it had been the expression <code>((S)T.this).name</code>."
</em>
</blockquote>
<p>
This specification looks very reasonable, considering that for a field access only the <em>compile-time type</em> of the subject expression is used to determine which field is to be used. By casting the subject expression (<code>this</code>) to the right type, the expected field of the super class is accessed.
</p>
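<p>
In the common case this equivalence indeed holds; a minimal sketch (my own, with everything in one compilation unit):
</p>

```java
// Sketch: here super.x and ((S) this).x select the same field, because a
// field access is resolved against the compile-time type of its subject.
class S {
    int x = 1;
}

class C extends S {
    int x = 2;  // hides S.x

    int viaSuper() { return super.x; }
    int viaCast()  { return ((S) this).x; }

    public static void main(String[] args) {
        C c = new C();
        System.out.println(c.viaSuper() + " " + c.viaCast());  // prints: 1 1
    }
}
```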
<h2>Oops</h2>
<p>
Of course, it is always nice to have your type checker as compact as possible, so I was very happy with this specification. I could just forward everything related to super field accesses to the corresponding expression with a cast and a <code>this</code> expression. The Dryad typing rules looked something like this:
</p>
<pre>
attributes:
  |[ super.x ]| -> <attributes> |[ ((reft) this).x ]|
  where
    <em>reft is the superclass of the current class</em>

attributes:
  |[ cname.super.x ]| -> <attributes> |[ ((reft) cname.this).x ]|
  where
    <em>reft is the superclass of the class cname</em>
</pre>
<p>
This implementation looks very attractive, but ... it didn't work. The reason is that <code>super.name</code> is in fact <em>not</em> exactly the same as <code>((S)this).name</code>, due to the details of protected access, which I've <a href="http://mbravenboer.blogspot.com/2006/04/on-details-of-protected-access-in-java">previously</a> written about on my blog. I'm not going to redo that, so let me just give an example (based on the example in the previous post) where this assumed equivalence is invalid. First, the following two classes are valid and can be compiled without any problems:
</p>
<pre>
package a;
public class A {
  protected int secret;
}

package b;
public class B extends a.A {
  void f() {
    super.secret = 5;
  }
}
</pre>
<p>
Next, let's change the assignment to the expression <code>((a.A) this).secret</code>, which is equivalent to <code>super.secret</code> according to the specification.
</p>
<pre>
package b;
public class B extends a.A {
  void f() {
    ((a.A) this).secret = 5;
  }
}
</pre>
<p>
Unfortunately, this won't compile, due to the details of protected access:
</p>
<pre>
b/B.java:5: secret has protected access in a.A
((a.A) this).secret = 5;
^
1 error
</pre>
<p>
This example shows that the two expressions are not treated in the same way, so this looks like a problem in the Java Language Specification to me. Also, this shows that in the semantics of languages like Java, the devil is really in the details. What surprises me is that nobody has mentioned this before. Several major Java compilers have been implemented, right? Shouldn't the programmers responsible for these compilers have encountered this problem?
</p>
<h2>Java Virtual Machine</h2>
<p>
Another interesting thing is how the Java Virtual Machine specification deals with this. There is no special bytecode operator for accessing fields of super classes: all field assignments are performed using the <code>putfield</code> operator. Assuming that the source compiler would ignore the protected access problem, the two unequal examples I just gave would compile to exactly the same bytecode. So how can the JVM report an error about an illegal access for the expression <code>((a.A) this).secret</code>? Well, it turns out that it doesn't.
</p>
<p>
We can show this by first making <code>secret</code> public, compiling <code>B</code>, then making <code>secret</code> protected, and recompiling only <code>A</code>. This works like a charm: doing this trick for the following example prints <code>secret = 5</code>.
</p>
<pre>
package a;
public class A {
  protected int secret;

  public void print() {
    System.out.println("secret = " + secret);
  }
}

package b;
public class B extends a.A {
  void f() {
    ((a.A) this).secret = 5;
  }

  public static void main(String[] ps) {
    B b = new B();
    b.f();
    b.print();
  }
}
</pre>
<p>
However, if this were allowed in bytecode in general, then the security vulnerability that was fixed with the details of protected access would actually only give <em>source</em>-level protection. Obviously, that would be no protection at all: you can safely assume that attackers are capable of writing bytecode. So let's try to make the example a bit more adventurous by passing the subject expression to the <code>f</code> method:
</p>
<pre>
package b;
public class B extends a.A {
  void f(a.A a) {
    a.secret = 5;
  }

  public static void main(String[] ps) {
    B b = new B();
    b.f(b);
    b.print();
  }
}
</pre>
<p>
This time, the verifier reports an error:
</p>
<pre>
Exception in thread "main" java.lang.VerifyError:
(class: b/B, method: f signature: (La/A;)V)
Bad access to protected data
</pre>
<p>
This error report is correct, so apparently the verifier does check for illegal protected access. In the first case, it was just a bit more liberal than the source language. The question is: how is this specified in the Java Virtual Machine specification? My first impression was that there might be some special handling of accesses to <code>this</code>. However, this would require the verifier to trace which local variables might have the value of <code>this</code>, which is rather unlikely. Then, Dick Eimers (who did lots of scary bytecode stuff for his master's thesis) pointed me to a paper that exactly covers this subject: <a href="http://www.jot.fm/issues/issue_2005_10/article3">Checking Access to Protected Members in the Java Virtual Machine</a> by <a href="http://www.kestrel.edu/home/people/coglio/">Alessandro Coglio</a>. Strangely enough, this paper is not cited anywhere, while I think that the discussion of this issue is pretty good.
</p>
<p>
It turns out that the difference in accessibility between super field accesses and ordinary field accesses is handled <em>implicitly</em>, thanks to the type inferencer used by the Java Virtual Machine. The inferred type of the operand of the field access will be more specific than the type in the corresponding source code, which makes the access to the protected field valid in bytecode. I don't think that this implicit handling of the observed difference is a very good idea.
</p>

<h2>Our take on injection attacks (2007-02-06)</h2>
<p>
If you haven't been hiding under some really impressive rock for the last decade, then you probably know that injection attacks are a major issue in web applications. The problem of <a href="http://en.wikipedia.org/wiki/SQL_injection">SQL injection</a> is well-known, but you see similar issues <em>everywhere</em>: <a href="http://www.google.com/search?hl=en&q=sql+injection&btnG=Search">SQL</a>, <a href="http://www.google.com/search?hl=en&q=shell+injection&btnG=Search">Shell</a>, <a href="http://www.google.com/search?hl=en&q=xml+injection&btnG=Search">XML</a>, <a href="http://www.google.com/search?hl=en&q=html+injection&btnG=Search">HTML</a>, <a href="http://www.google.com/search?hl=en&q=ldap+injection&btnG=Search">LDAP search filters</a>, <a href="http://www.google.com/search?hl=en&q=xpath+injection&btnG=Search">XPath</a>, <a href="http://www.google.com/search?hl=en&q=xquery+injection&btnG=Search">XQuery</a>, and a whole series of enterprisey query languages, such as <a href="http://www.google.com/search?hl=en&q=hql+injection&btnG=Search">HQL</a>, <a href="http://www.google.com/search?hl=en&q=jdoql+injection&btnG=Search">JDOQL</a>, <a href="http://www.google.com/search?hl=en&q=ejbql+injection&btnG=Search">EJBQL</a>, <a href="http://www.google.com/search?hl=en&q=oql+injection&btnG=Search">OQL</a> are all potential candidates for injection attacks. Just search for any of these languages together with the term injection and observe the horror. Recently, it has also become more popular to mix a program written in <a href="http://java.sun.com/developer/technicalArticles/J2SE/Desktop/scripting/">Java with scripts</a>, usually something like JavaScript, Ruby or Groovy. If you include user input in the script, then this is yet another vector of attack.
</p>
<h2>Solutions?</h2>
<p>
Of course it is possible to just advise programmers to properly escape all user inputs, which prevents most of the injection attacks. However, that's like telling people to do their own <a href="http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)">memory management</a> or to do the dishes every day (which is a particular problem I have). In other words: you won't get it right.
</p>
<p>
Most of the research on injection attacks has focused on finding injection problems in existing source code using static and/or runtime analysis. Usually, this results in tools that check for injection attacks for specific languages (e.g. SQL) in specific host languages (e.g. PHP). This is very important and useful work, since it can easily be applied to detect or prevent injection attacks in existing code bases. However, at some point we just fundamentally need to reconsider the way we program. Why just fight the symptoms if you can fix the problem?
</p>
<p>
So that's what we've done in our latest work called <a href="http://www.stringborg.org">StringBorg</a>. I'm not going to claim that all your injection problems will be over tomorrow, but at least I think that what we propose here gives us some perspective on solving these issues once and for all in a few years. The solution we propose is to use syntax embeddings of the <em>guest</em> languages (SQL, LDAP, Shell, XPath, JavaScript) in the <em>host</em> language (PHP, Java) and let the system do all the proper <em>escaping</em> and <em>positive checking</em> of user input.
</p>
<h2>Examples</h2>
<p>
The paper I'll mention later explains all the technical details, and I cannot redo that in a better way in a blog, so I'll just give a bunch of examples that illustrate how it works.
</p>
<h4>SQL</h4>
<p>
The first example is an embedding of SQL in Java. This example illustrates how you can insert strings in SQL queries and compose SQL queries at runtime. The first code fragment is the classic, vulnerable, way of composing SQL queries using string concatenation.
</p>
<pre>
String s = "'; DROP TABLE Users; --";
String e = "username = \'" + s + "\'";
String q = "SELECT password FROM Users WHERE " + e;
System.out.println(q);
</pre>
<p>
Clearly, if the string <code>s</code> was provided by the user, then this would result in an injection attack: the final query is <code>SELECT password FROM Users WHERE username = ''; DROP TABLE Users; --'</code>. Bad luck, the <code>Users</code> table is gone! (or maybe you can thank your database administrator).
</p>
<p>
With StringBorg, you can introduce some kind of literal syntax for SQL. The SQL code is written between the quotation symbols <code><|...|></code>. SQL code or strings can be inserted in another SQL query using the syntax <code>${...}</code>. The example would be written in StringBorg as:
</p>
<pre>
String s = "'; DROP TABLE Users; --";
SQL e = <| username = ${s} |>;
SQL q = <| SELECT password FROM Users WHERE ${e} |>;
System.out.println(q.toString());
</pre>
<p>
This will result in the correct query, <code>SELECT password FROM Users WHERE username = '''; DROP TABLE Users; --'</code>, where the single quotes have been escaped by StringBorg according to the rules of the SQL standard (the exact escaping rules depend on the SQL dialect). Not only does the StringBorg solution solve the injection problem, it is also much prettier! This example also shows that it is not required to know the full SQL query at compile-time, for example the actual condition <code>e</code> could be different for two branches of an <code>if</code> statement, or could even be constructed in a <code>while</code> statement.
</p>
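<p>
As an illustration of what the escaping amounts to, here is a hypothetical, hand-written Java sketch (not StringBorg's actual implementation; the class and method names are made up for this example) of the quote-doubling that the SQL standard prescribes:
</p>

```java
public class SqlEscapeDemo {
    // The SQL standard escapes a single quote inside a string literal by
    // doubling it. StringBorg applies this kind of rewriting automatically
    // to every ${...} insertion; doing it by hand looks like this.
    static String escape(String s) {
        return s.replace("'", "''");
    }

    public static void main(String[] args) {
        String s = "'; DROP TABLE Users; --";
        String e = "username = '" + escape(s) + "'";
        String q = "SELECT password FROM Users WHERE " + e;
        System.out.println(q);
        // prints: SELECT password FROM Users WHERE username = '''; DROP TABLE Users; --'
    }
}
```

<p>
The point of StringBorg is exactly that you never write such an <code>escape</code> call yourself: forgetting it once is all an attacker needs.
</p>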
<p>
The nice thing about StringBorg is that the SQL support is not restricted to a specific language, in this case Java. For PHP, you can do exactly the same thing:
</p>
<pre>
$s = "'; DROP TABLE Users; --";
$e = <| username = ${$s} |>;
$q = <| SELECT password FROM Users WHERE ${$e} |>;
echo $q->toString(), "\n";
</pre>
<h4>LDAP</h4>
<p>
Using user input in LDAP search filters has very similar injection problems. First a basic example, where there is no problem with the user input:
</p>
<pre>
String name = "Babs Jensen";
LDAP q = (| (cn=$(name)) |);
System.out.println(q.toString());
</pre>
<p>
The resulting LDAP filter will be <code>(cn=Babs Jensen)</code>, which is what you would expect. If the string has the value <code>Babs (Jensen)</code>, then the parentheses need to be escaped. Indeed, StringBorg will produce the filter <code>(cn=Babs \28Jensen\29)</code>. This input might have been an accident, but of course we can easily change this into a real injection attempt by using the string <code>*</code>. Again, StringBorg will properly escape this, resulting in the query <code>(cn=\2a)</code>.
</p>
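<p>
The escaping shown here follows the rules for LDAP search filter strings (RFC 4515), which replace special filter characters by a backslash and their two-digit hex code. A minimal hypothetical sketch in Java (names invented for this example, not StringBorg's actual code):
</p>

```java
public class LdapEscapeDemo {
    // RFC 4515 escaping for values in LDAP search filters: the special
    // characters \, *, (, ) and NUL become a backslash plus hex code.
    static String escape(String s) {
        StringBuilder b = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '\\': b.append("\\5c"); break;
                case '*':  b.append("\\2a"); break;
                case '(':  b.append("\\28"); break;
                case ')':  b.append("\\29"); break;
                case '\0': b.append("\\00"); break;
                default:   b.append(c);
            }
        }
        return b.toString();
    }

    public static void main(String[] args) {
        System.out.println("(cn=" + escape("Babs (Jensen)") + ")"); // (cn=Babs \28Jensen\29)
        System.out.println("(cn=" + escape("*") + ")");             // (cn=\2a)
    }
}
```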
<h4>Shell</h4>
<p>
Programs that invoke shell commands can be vulnerable to injection attacks as well (as the TWiki developers and users <a href="http://www.google.com/search?hl=en&q=twiki+shell+injection&btnG=Search">have learned the hard way</a>). Similar to the other examples, StringBorg introduces a syntax to construct shell commands, and escape strings:
</p>
<pre>
Shell cmd = <| /bin/echo svn cat http://x -r <| s |> |>;
System.out.println(cmd.toString());
</pre>
<p>
If <code>s</code> has the values <code>bravo</code>, <code>foo
bar</code>, <code>*</code> and <code>; echo pwn3d!</code>
respectively, then the resulting commands are:
</p>
<pre>
/bin/echo svn cat http://x -r bravo
/bin/echo svn cat http://x -r foo\ bar
/bin/echo svn cat http://x -r \*
/bin/echo svn cat http://x -r \;\ echo\ pwn3d\!
</pre>
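<p>
The outputs above suggest a simple escaping rule: put a backslash before every character that is not a letter or digit. A hypothetical Java sketch of that rule (again, not StringBorg's actual implementation):
</p>

```java
public class ShellEscapeDemo {
    // Backslash-escape every character that is not a letter or digit,
    // which is a safe (if overly eager) quoting rule for POSIX shells.
    static String escape(String s) {
        StringBuilder b = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (!Character.isLetterOrDigit(c)) b.append('\\');
            b.append(c);
        }
        return b.toString();
    }

    public static void main(String[] args) {
        for (String s : new String[] { "bravo", "foo bar", "*", "; echo pwn3d!" }) {
            System.out.println("/bin/echo svn cat http://x -r " + escape(s));
        }
    }
}
```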
<h4>JavaScript</h4>
<p>
Not only does StringBorg prevent injection attacks, it also makes composing SQL, XQuery, JavaScript, etc. more attractive: you don't have to concatenate all these nasty strings anymore. For example, the following example taken from an article on the new Java scripting support is just plain ugly:
</p>
<pre>
jsEngine.eval(
"function printNames1(namesList) {" +
" var x;" +
" var names = namesList.toArray();" +
" for(x in names) {" +
" println(names[x]);" +
" }" +
"}" +
"function addName(namesList, name) {" +
" namesList.add(name);" +
"}"
);
</pre>
<p>
whereas this looks quite reasonable:
</p>
<pre>
jsEngine.eval(|[
function printNames1(namesList) {
var x;
var names = namesList.toArray();
for(x in names) {
println(names[x]);
}
}
function addName(namesList, name) {
namesList.add(name);
}
]| );
</pre>
<p>
Of course, this would be easy to fix by introducing multi-line string literals in Java, but in addition to the nicer syntax, you get protection against injection attacks and compile-time syntactic checking of the code for free!
</p>
<h2>Generic, generic, generic</h2>
<p>
Now, if you are familiar with our work, then this solution won't really surprise you, since we have been working on syntax embeddings for some time now (although in different application areas, such as meta programming). However, this work is quite a fundamental step towards making these syntax embeddings easier to use by ordinary programmers. First, the system now supports ambiguities, which always was the weak point of our code generation work: if you don't support ambiguities, then the programmer needs to be familiar with the details of the grammar of the guest language, which you really don't want. Fortunately, this is now a technical detail that you can forget about! Second, <em>no meta-programming</em> is required at all to add a new guest language (e.g. XPath) to the system. All you need to do is define the syntax of the language, define the syntax of the embedding, and optionally define escaping rules for strings and you're all set. Thus, compared to our previous work on <a href="http://www.stratego-language.org/Stratego/ConcreteSyntaxForObjects">MetaBorg (OOPSLA '04)</a>, there is no need for implementing the mapping from the syntax of the guest language to code in the host language.
</p>
<p>
This is a pretty amazing property: basically, this means that you can just use <em>languages</em> as <em>libraries</em>. You can just pick the languages you want to use in a source file and that's it! No difficult meta-programming stuff, no program transformation, no rewrite rules and strategies, no limitations. In fact, this even goes beyond libraries: libraries are always language specific (for example for Java or PHP), but the implementation of support for a guest language (e.g. SQL) is <em>language independent</em>. This means that if some person or company implements support for a guest language (e.g. SQL) then <em>all</em> host languages (Java, PHP, etc) are immediately supported.
</p>
<h2>Future?</h2>
<p>
The paper we wrote about this is titled <em>"Preventing Injection Attacks with Syntax Embeddings. A Host and Guest Language Independent Approach"</em> and is now available as <a href="http://swerl.tudelft.nl/twiki/pub/Main/TechnicalReports/TUD-SERG-2007-003.pdf">technical report</a>. Last week, we submitted this paper to the <a href="http://www.usenix.org/events/sec07/index.html">USENIX Security Symposium</a>. We won't know if the paper is accepted until April 4, but I would be flabbergasted if it got rejected ;) . Our prototype implementation, called <a href="http://www.stringborg.org">StringBorg</a>, is available as well. I'm looking forward to your feedback and opinions. I'll add some examples to the webpage later this week, so make sure to come back!
</p>
<p>
<a href="http://blogs.sun.com/ahe">Peter Ahé</a> already has a general solution for embedding foreign languages on his <a href="http://blogs.sun.com/ahe/entry/java_se_7_wish_list">wish list</a> (as opposed to an XML specific solution), so could this actually be happening in the near future?
</p>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-6943366.post-38679445236595619262007-02-04T13:41:00.000-08:002007-02-04T13:53:54.657-08:00Some random thoughts on the complexity of syntax<p>
For some reason I was invited to join the <a href="https://lists.csail.mit.edu/mailman/listinfo/jsr308">JSR-308 mailing list</a>, which is a public mailing list discussing <a href="http://jcp.org/en/jsr/detail?id=308">JSR-308</a>. I've reported some problems with the Java grammar of the Java Language Specification in the past, so maybe I'm now on some list of people that might be interested. I'm not sure if I will contribute to the discussion, but at least lurking has been quite interesting. If you're not familiar with the proposal, JSR-308 has been started to allow annotations at more places in a Java program. The title of the JSR is "Annotations on Java Types", but the current discussion seems to interpret the goal of the JSR a bit more ambitiously, since there is a lot of talk going on about annotations of statements, and even expressions. I don't have a particularly strong opinion on this, but it's interesting to observe how the members of the list are approaching this change in the language.
</p>
<p>
<a href="http://www.gafter.com/~neal/">Neal Gafter</a> seems to represent the realistic camp in the discussion (not to call his opinion conservative). Neal was once the main person responsible for the Java Compiler at Sun, so you can safely assume that he knows what he's talking about. Together with Joshua Bloch, he is now mainly responsible for the position of Google in these Java matters. Last week, he sent another interesting message to the list: <a href="https://lists.csail.mit.edu/pipermail/jsr308/2007-February/000083.html">Can we agree on our goals?</a>. As I mentioned, I don't have a very strong opinion on what the goal of the JSR should be, but Neal raised a point about syntax that reminded me again of some thoughts on syntax that have been lingering in my mind for some time now. Neal wrote:
</p>
<blockquote>
<em>"I think the right way to design a language with a general annotation facility is to support (or at least consider supporting) a way of annotating every semantically meaningful nonterminal. Doing that requires a language design with a very simple syntax. Java isn't syntactically simple, and I don't think there is anything we can do it make it so. If we wanted to take this approach with Java we'd have to come up with a syntactic solution for every construct that we want to be annotatable. Given the shape of the Java grammar, that solution would probably be a different special case for every thing we might want to annotate."</em>
</blockquote>
<p>
Whether you like it or not, this is a most valid concern. The interesting point about this annotation thing is that it is a language feature that applies in a completely different way to existing language constructs. Adding an expression, a statement, or some modifier to the language is not difficult to do, since this adds only an <em>alternative</em> to the existing structure of the language. Annotations, on the other hand, do not add just another alternative, but crosscut (sorry, I couldn't avoid the term) the language. If you are an annotation guy, then you want to have them everywhere, since you essentially want to add information to arbitrary language constructs. Now this is quite a problem if you have a lot of language constructs, not alternative language constructs, but distinct <em>kinds</em> of language constructs (of course known as nonterminals to grammar people). This would be trivial to do in a language where there are not that many language constructs, such as Lisp and Scheme, and even model-based languages.
</p>
<p>
This makes you wonder what is a good language syntax. Should adding such a crosscutting language feature be easy? Conceptually, it is beyond any doubt attractive to have a limited number of language constructs, but on the other hand it is very convenient that Java has this natural syntax for things like modifiers, return types, formal parameters, formal type parameters, throws clauses, array initializers, multiple variable declarations at the same line, and so on. If you want to add annotations to all these different language constructs, then you basically have to <em>break</em> their abstraction, which suddenly makes them look unnatural, since it becomes clear that a syntactical construct that used to be easy to read has some explicit semantic meaning. That is, the entire structure of the language is exposed in this way. It is no longer possible to <em>read</em> a program, abstracting over all the details of the language. Also, for several locations it is very unclear to the reader what an annotation refers to. For example, the current <a href="http://pag.csail.mit.edu/jsr308/java-annotation-design.html">draft specification</a> states that
</p>
<blockquote>
<em>
"There is no need for new syntax for annotations on return types, because Java already permits an annotation to appear before a method return type. Currently, such annotations are interpreted as on the method declaration — for example, the @Deprecated annotation indicates that the method is deprecated. The person who defines the annotation decides whether an annotation that appears before the return value applies to the method declaration or to the return type.
</em>
</blockquote>
<p>
Clearly, there is a problem in this case, since an annotation in the header of the method could refer to several things. The reason for this is the syntactical conciseness of the language for method declarations: you don't have to identify every part explicitly, hence if you want to annotate some part only, then you have a problem. Moving that decision to the declaration side of the annotation is not an attractive solution; for example, there will be annotations that are applicable to both declarations and types.
</p>
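<p>
The ambiguity is easy to reproduce. In this sketch (with a made-up annotation <code>@Foo</code>), current Java attaches the annotation to the method declaration; nothing in the syntax says whether the author meant the declaration or the <code>int</code> return type:
</p>

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationTargetDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Foo {}

    // Syntactically @Foo sits right before the return type, but it
    // annotates the method declaration -- the reader cannot tell from
    // the syntax which was intended.
    @Foo int length() { return 42; }

    static boolean annotatesMethod() {
        try {
            return AnnotationTargetDemo.class.getDeclaredMethod("length")
                    .isAnnotationPresent(Foo.class);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(annotatesMethod()); // prints true
    }
}
```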
<p>
This all brings us to the question of how to determine whether the syntax of a programming language is simple. Is that really just some subjective idea, or is it possible to determine this semi-automatically with more objective methods? I assume that the answer depends on the way the language is applied. For example, in program transformation it is rather inconvenient to have all kinds of optional clauses for a language construct. This reminds me of a <a href="http://blog.nicksieger.com/articles/2006/10/27/visualization-of-rubys-grammar">post</a> by Nick Sieger, who applied a visualization tool to some grammars. For some reason, this post was very popular and was discussed all over the web, including <a href="http://lambda-the-ultimate.org/node/1849">Lambda the Ultimate</a> and <a href="http://lwn.net/Articles/206533/">LWN</a>. However, most people seemed to agree that the visualizations did not tell much about the complexity of the languages. Indeed, the most visible aspects of the pictures are the <em>encodings</em> of the actual grammar that had to be applied to make the grammar non-ambiguous or to fit in the used grammar formalism. For example, the encoding of precedence rules for expressions makes the graph look pretty, but conceptually this is just a single expression. As a first guess, I would expect that some balance between the number of nodes and edges would be a better measurement: lots of edges to a single node means that nonterminal is allowed at a lot of places, which is probably good for the orthogonality of the language (more people have been claiming this in the discussion about these visualizations).
</p>
<p>
But well, this makes you wonder if there has been any research on this. The only work I'm familiar with is <a href="http://wiki.di.uminho.pt/wiki/bin/view/PURe/SdfMetz">SdfMetz</a>, which is a metrics tool for <a href="http://www.syntax-definition.org">SDF</a> developed by <a href="http://wiki.di.uminho.pt/twiki/bin/view/Joost">Joost Visser</a> and <a href="http://wiki.di.uminho.pt/wiki/bin/view/Main/TiagoAlves">Tiago Alves</a>. SDF grammars are usually closer to the intended design of a language than LR or LL grammars, so if you are interested in the complexity of the syntax of a language, then using SDF grammars sounds like a good idea. SdfMetz supports quite an interesting list of metrics. Some are rather obvious (count productions etc), but there are also some more complex metrics. I'm quite sure that (combinations of) these metrics can give some indication of the complexity of a language. Unfortunately, the work on SdfMetz was not mentioned at all in the discussion of these visualizations. Why is it that a quick and dirty blogpost is discussed all over the web and solid research does not get mentioned? Clearly, the SdfMetz researchers should just post a few fancy pictures for achieving instant fame ;) . Back to the question of what makes a good syntax: they have mostly focused on the facts until now (see their paper <a href="http://wiki.di.uminho.pt/wiki/pub/PURe/PurePublications/DI-PURe-05-05-01.pdf">Metrication of SDF Grammars</a>), and have not done much work on <em>interpreting</em> the metrics they have collected. It would be interesting if somebody would start doing this.
</p>
<p>
Joost Visser and Tiago Alves will be presenting SdfMetz at <a href="http://www.di.uminho.pt/ldta07/">LDTA 2007</a>, the Workshop on Language Descriptions, Tools and Applications (program now available!). As I mentioned in my previous post, we will be presenting our work on precedence rule recovery and compatibility checking there as well. So, if you are in the neighbourhood (or maybe even visiting ETAPS), then make sure to drop by if you are interested!
</p>
<p>
Another thing that finally seems to get some well-deserved attention is ambiguity analysis. It strikes me that the people on the JSR-308 list approach this rather informally, by just guessing what might be ambiguous or not. It should be much easier to play with a language and determine how to introduce a new feature in a non-ambiguous way. <a href="http://www.i3s.unice.fr/~schmitz/">Sylvain Schmitz</a> will be presenting a Bison-based ambiguity detection tool at LDTA, so that should be interesting to learn about. The paper is already online, but I haven't read it yet. Maybe I'll report about it later.
</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6943366.post-46231103817145763972007-01-25T01:12:00.000-08:002007-04-03T07:01:26.458-07:00Grammar Engineering, I'm loving it<p>
I think that the most attractive problems to work on as a researcher are the ones you actually encounter yourself. Obviously, it is an option to try to encounter these problems by studying or writing code that you are not actually directly interested in, but it is much more fun if you can work on issues in code that you just write out of your own interest.
</p>
<p>
This is what we've done in our latest paper, titled "Grammar Engineering Support for Precedence Rule Recovery and Compatibility Checking". Most of our work involves syntax definitions, for example to provide general support for the implementation of program transformations and also for our research on embedding and composing languages (for various applications). One of the problems we encounter is that the conversion of a grammar from one grammar formalism to another is rather unreliable. For example, if you need to convert a grammar from YACC to SDF, then you basically have no idea if the two grammars are compatible. For programs in general, this is understandable, since imperative source code is very difficult to compare. But, if you have a more or less declarative specification of a grammar, how is it possible that you cannot compare them at all?
</p>
<p>
As a first step towards supporting grammar compatibility checking, we have implemented a tool that compares the precedence rules of grammars. A very simple example of a precedence rule is that <code>1 + 2 * 3</code> should be parsed as <code>1 + (2 * 3)</code>. The precedence rules of a grammar might look like a trivial property at first sight, but actually it is rather complex to understand as a human what the precedence rules of a YACC or SDF grammar are. This tool has already been most successful for comparing existing C grammars written in YACC and SDF and deriving the exact precedence rules of PHP, which has quite a bizarre expression language.
</p>
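<p>
To make concrete what a precedence rule decides, here is a tiny, hypothetical precedence-climbing evaluator in Java for single-digit operands (this has nothing to do with our tool's implementation, which works on grammars, not on strings):
</p>

```java
public class PrecedenceDemo {
    // Minimal precedence-climbing evaluator: * and / bind tighter than
    // + and -, and all operators are left-associative.
    private static String src;
    private static int pos;

    static int parse(String s) {
        src = s.replace(" ", "");
        pos = 0;
        return expr(0);
    }

    private static int expr(int minPrec) {
        int left = Character.digit(src.charAt(pos++), 10);
        while (pos < src.length()) {
            char op = src.charAt(pos);
            int prec = (op == '+' || op == '-') ? 1 : 2;
            if (prec < minPrec) break;        // an outer call handles this operator
            pos++;
            int right = expr(prec + 1);       // prec + 1: left-associativity
            if (op == '+') left = left + right;
            else if (op == '-') left = left - right;
            else if (op == '*') left = left * right;
            else left = left / right;
        }
        return left;
    }

    public static void main(String[] args) {
        System.out.println(parse("1 + 2 * 3")); // prints 7, i.e. 1 + (2 * 3)
    }
}
```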
<p>
The paper has been accepted for <a href="http://www.di.uminho.pt/ldta07/">LDTA 2007</a>, the Workshop on Language Descriptions, Tools and Applications, which is an excellent place for this subject. We will present our work at this workshop at the end of March. A draft version of the paper is available from the <a href="http://martin.bravenboer.name/publications.html">publication list</a> at my homepage. The implementation is available as part of the <a href="http://www.stratego-language.org/Stratego/GrammarEngineeringTools">Stratego/XT Grammar Engineering Tools</a>. The website includes a bunch of examples. In the future, we hope to provide more tools to assist with the maintenance, testing, conversion, and analysis of grammars. In fact, Stratego/XT itself already contains some interesting tools for this, most prominently the grammar unit-testing tool <code>parse-unit</code>.
</p>
<p>
Now I think of it, it's probably a bad idea as a vegetarian to paraphrase a campaign by McDonald's ...
</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6943366.post-1154363882714077902006-07-31T09:20:00.000-07:002006-07-31T09:40:54.660-07:00Java-front versus Jackpot, APT, Eclipse JDT<p>
Recently, there have been a few interesting developments in standard support for open compilers and program transformation. For example, Sun released the annotation processing tools (APT) as part of the JDK5, which opens up Sun's Java compiler a bit. Also, there is <a href="http://jackpot.netbeans.org/">Jackpot</a>, a plugin for Netbeans for transformating Java code. The obvious question is how this relates to work that has been done in research on open compilers and program transformation.
</p>
<p>
Olivier Lefevre sent me an email to ask how the tree for Java provided by Jackpot and javac compares to our support for parsing and transforming Java in <a href="http://www.stratego-language.org/Stratego/JavaFront">Java-front</a>. The answer is probably useful in general, so I'll quote it here. Feel free to share your opinion in the comments!
</p>
<blockquote>
As you may know, starting with Java 6 the Sun JDK will ship with an API to the AST: see <a href="http://jackpot.netbeans.org/docs/org-netbeans-libs-javacapi/overview-summary.html">jackpot api</a>
</blockquote>
<p>
Yes, Jackpot and APT are great projects. However, there is not yet a full API to the AST in the standard JDK, afaik. The compiler will be more 'open' in two different ways.
</p>
<p>
First, the current annotation processing tool (APT) is going to be combined with javac, but APT only provides access to the global structure of a Java source file and does not include the statement and expression level. Also, this API does not allow modification of the Java representation. APT is read-only: you can only generate new code.
</p>
<p>
Second, there is Jackpot, which is a rule-based language for transforming Java code. For Jackpot, the representation of Java used by javac has been opened and cleaned up a bit to make it more usable in external tools. However, this representation is not standardized and Sun recommends not to use stuff from com.sun.*. Afaik, Jackpot will be shipped as part of NetBeans and not as part of the JDK.
</p>
<blockquote>How does this compare to Java-front?</blockquote>
<p>
That's a good question. The answer depends on the application.
</p>
<p>
If you just need an AST for Java, then the advantage of the com.sun.source.tree AST is that you are absolutely sure that the AST conforms to javac, since the implementation is exactly the same. Of course, the same holds for ecj and the AST of Java that is provided by Eclipse (org.eclipse.jdt.core.dom.*). However, the grammar provided by Java-front is very good, so I don't expect any parsing problems. It has been tested and used heavily in the last few years and the development of this grammar has even resulted in a number of fixes in the JLS.
</p>
<p>
An advantage of Java-front is that it is a bit more language independent. Obviously, the Eclipse and Javac ASTs are to be used in Java. If you want to implement a transformation of Java in a different language, then you have to write an exporter. Java-front outputs ASTs in a language independent exchange format (ATerms), which can also be converted to XML. Of course, Java-front is most useful if you combine it with a language that is designed for program transformation and operates on ATerms, such as Stratego. One of the biggest advantages of Stratego is that it is very easy to do traversals over the AST: no tiresome visitors.
</p>
<p>
If you need more information about Java than can be defined in a context-free grammar, then you need more than just a parser. For more complex transformations (which includes simple refactorings), you'll probably need an implementation of disambiguation (reclassification) and qualification of names. A simple statement like System.out.println is already highly ambiguous without such an analysis: is System a variable? a class? a package? Is out an inner class? a field? Most likely, you'll need type information as well. Javac and Eclipse have the major advantage that you can safely assume that their type checkers are pretty good. For Jackpot, I suppose that there is some way to get type information (since type information can be used in Jackpot), but from a quick scan I cannot figure out how to do this from the public API. For Java-front, there is an extension (<a href="http://www.stratego-language.org/Stratego/TheDryad">Dryad</a>) that supports type-checking and disambiguation, but this work is not yet complete. Using an existing compiler is of course a safer alternative. For experiments, the stuff provided by Dryad should be ok (we use it in our course on program transformation).
</p>
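<p>
The System.out.println ambiguity can even be demonstrated within Java itself. In this contrived sketch (the class names are invented), a nested class shadows <code>java.lang.System</code>, so the same source text <code>System.out</code> denotes a different entity than usual:
</p>

```java
public class NameResolutionDemo {
    // A nested class that shadows java.lang.System within this class.
    static class System {
        static String out = "shadowed";
    }

    public static void main(String[] args) {
        // Here "System.out" resolves to the nested class's String field;
        // only a name analysis (not the parser) can figure that out.
        java.lang.System.out.println(System.out); // prints "shadowed"
    }
}
```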
<p>
A different application is the implementation of Java language extensions. Javac and ECJ do not support this. The Java representation is open, but not extensible. Java-front uses a modular syntax definition formalism (SDF) that allows you to extend the grammar of Java in an almost trivial way. The strength of this approach is illustrated by the embedding of the Java syntax in Stratego (<a href="http://www.stratego-language.org/Stratego/MetaProgrammingWithConcreteObjectSyntax">GPCE '02</a>) and Java (<a href="http://www.cs.uu.nl/wiki/Visser/GeneralizedTypeBasedDisambiguationOfMetaProgramsWithConcreteObjectSyntax">GPCE '05</a>), the applications of the grammar in <a href="http://www.stratego-language.org/Stratego/ConcreteSyntaxForObjects">MetaBorg</a> (OOPSLA '04), and the modular extension of the grammar for the definition of the <a href="http://www.stratego-language.org/Stratego/AspectJFront">AspectJ syntax</a> (OOPSLA '06). Of course, these applications are not really interesting if you are just interested in a Java program transformation tool, but it illustrates the reusability of such a syntax definition (as opposed to the grammars used by ecj, javac and most other parser generators). You'll need tools for pretty-printing as well. Outside of Eclipse, pretty-printing the JDT Core DOM is troublesome and mostly useful for debugging the output only. Inside Eclipse, the support for pretty-printing and preserving the layout of a program is of course excellent (see the existing implementations of refactoring). Jackpot provides a pretty-printer as well, but I don't know if it can be used outside NetBeans. Java-front provides the tool pp-java, which has been heavily tested and can insert parentheses in exactly the right places.
</p>
<blockquote>I am interesting in implementing small refactorings.</blockquote>
<p>
If you want to implement solid refactorings that could eventually even be deployed, then I would suggest to use an existing framework for refactoring, since there is much more to do than just getting an AST. A few years ago, I implemented an extract method refactoring in JRefactory, which was quite a useful experience. I suppose it's a bit obsolete now, since the refactoring market is dominated by refactorings directly supported by IDEs. You could consider Eclipse or NetBeans.
</p>
<p>
If your objective is to play a bit with program transformations and maybe even be a bit more adventurous by using real program transformation languages, then it might be nice to use Java-front and Stratego. Using Stratego is a major advantage over the tiresome implementation of traversals in Java (and most other languages).
</p>
<p>
Hope this helps :) Feel free to ask more questions if anything is unclear :)
</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6943366.post-1145400649379401242006-04-18T15:49:00.000-07:002006-04-19T03:32:53.130-07:00On the Details of Protected Access in Java<p>
When I was working on my master thesis I spent most of my time in
the Software Technology Lab of our department. This was really a
gorgeous place to work, mostly due to a group of great fellow
students. After finishing our master study, we all left this lab a
few years ago and have a job now. One of the students I worked
with first went to a research institute to work on maritime
simulations and is now moving to a company that produces Real
Software.
</p>
<p>
Of course, the problem is that universities do a terrible job at
teaching students how to write Real Software, so the first thing
these companies do is give their employees some proper
education. Real Software is written in Java, so the first step is
to become a <a href="http://www.sun.com/training/certification/java/java_progj2se.html">Sun
Certified Programmer for the Java 2 Platform</a>, aka SCJP.
</p>
<p>
This guy is now going through the SCJP materials, which covers
Java at a surprising level of detail. Universities do at least one
thing right: they stimulate you to think about the things you
learn, instead of just accepting it all as-is. As a result, we
have had some nice discussions about the details of Java, and he
has only just started ;) .
</p>
<p>
Last week, we started talking about the details of protected
access in Java. The study material seems to mention the detailed
rules, but does not explain why the rules are the way they
are. Strangely enough, I could not find any good resource that does
this, and I couldn't remember where I learned about this, so I
decided to explain it myself.
</p>
<h2>Protected Access</h2>
<p>
The Java Language Specification has two subsections that define
the rules for accessibility. The general rules are in subsection
<a href="http://java.sun.com/docs/books/jls/second_edition/html/names.doc.html#102765">6.6.1:
Determining Accessibility</a> and some of the more complicated
rules for protected access are in subsection <a href="http://java.sun.com/docs/books/jls/second_edition/html/names.doc.html#62587">6.6.2:
Details on Protected Access</a>. The rules of the first subsection
are rather straightforward and don't need much explanation.
</p>
<p>
For protected members and constructors this subsection defines
that the members are accessible if the access occurs from within
the same package. This rule is clear, although many people seem to
be surprised by this. However, the second case refers to
subsection 6.6.2, which is where most of the confusion arises. This
subsection defines the additional accessibility rules for
protected access, which most people know informally as
“a protected member is accessible from subclasses”. A
simple example where this accessibility rule is applied:
</p>
<blockquote>
<pre>
package a;
public class A {
protected int secret;
}
</pre>
</blockquote>
<blockquote>
<pre>
package b;
public class B extends a.A {
void f() {
secret = 5;
}
}
</pre>
</blockquote>
<p>
However, the rules that define accessibility of protected members
from subclasses are a bit more complex than you might expect. The
problem is that the rule you know informally is no longer so clear
if the access to the protected member is qualified
(i.e. applied to an object, not implicitly to
<code>this</code>). Consider this simple example:
</p>
<blockquote>
<pre>
package a;
public class A {
protected int secret;
}
</pre>
</blockquote>
<blockquote>
<pre>
package b;
public class B2 extends a.A {
void f(a.A a) {
a.secret = 5;
}
}
</pre>
</blockquote>
<p>
In this example, the access to the protected instance field secret of A
occurs from a subclass B2 of A, so according to our informal idea
of protected access, this should be allowed. However, this example
should make you feel a bit uncomfortable. Indeed, this is not
allowed in Java. Let's take a look at what the compilers say:
</p>
<blockquote>
<pre>
$ javac b/B2.java
b/B2.java:5: secret has protected access in a.A
a.secret = 5;
</pre>
</blockquote>
<blockquote>
<pre>
$ jikes b/B2.java
Found 1 semantic error compiling "b/B2.java":
5. a.secret = 5;
^----^
</pre>
<code>
*** Semantic Error: The instance field "secret" in class "A" has
protected access, but the qualifying expression is not of type "B2" or
any of its enclosing types.
</code>
</blockquote>
<blockquote>
<pre>
$ ecj b/B2.java
----------
1. ERROR in b/B2.java
(at line 5)
a.secret = 5;
^^^^^^^^
The field A.secret is not visible
----------
</pre>
</blockquote>
<blockquote>
<p style="font-style: italic; font-size: small;">
Side note: I was a bit surprised by the error report of
ecj. This could be a bug: the protected field <em>is</em>
visible but it is not <em>accessible</em>. The error report of
jikes is by far the best.
</p>
</blockquote>
<p>
Basically, if this access were allowed, then you could access
any protected field of any class by just making a subclass of the
class that declares the protected field. Hence, you could never
<em>really</em> protect your protected fields if this kind of
access were allowed. Consider this example:
</p>
<blockquote>
<pre>
package a;
public class A {
protected int secret;
}
</pre>
</blockquote>
<blockquote>
<pre>
package b;
public final class MySecurityHazard extends a.A {
}
</pre>
</blockquote>
<blockquote>
<pre>
package c;
public class C extends a.A {
void f(b.MySecurityHazard b) {
b.secret = 5;
}
}
</pre>
</blockquote>
<p>
In this example the class MySecurityHazard has deliberately been
declared <code>final</code> to prevent its sensitive fields from
being accessed through further subclassing. However, according to
our (now deprecated) informal understanding of protected access,
we can just create another subclass <code>C</code> of
<code>A</code> that can be used to access the protected fields of
<code>MySecurityHazard</code>.
</p>
<p>
How can we define the protected access that we would like to have?
Of course, qualified access to protected members could be
forbidden completely (and maybe that would have been a good idea),
but Java is a bit more flexible, without introducing security
problems. The basic problem of the unwanted access is that you
start a new inheritance branch and access protected fields from
there. So, the qualified access to protected fields should be
restricted to the same inheritance branch as the object to which
it is applied. This is exactly what the details of protected
access are about. They define that access from a class
<code>S</code> is permitted only if the type of the qualifier is
<code>S</code> or a subclass of <code>S</code>. Let me now finally
quote the specification:
</p>
<blockquote>
<p>
Let <i>C</i> be the class in which a <code>protected</code>
member m is declared. Access is permitted only within the body
of a subclass <i>S</i> of <i>C</i>. In addition, if <i>Id</i>
denotes an instance field or instance method, then:
</p>
<ul>
<li>
If the access is by a qualified name
<i>Q</i><code>.</code><i>Id</i>, where <i>Q</i> is an
<em>ExpressionName</em>, then the access is permitted if and
only if the type of the expression <i>Q</i> is <i>S</i> or a
subclass of <i>S</i>.
</li>
<li>
If the access is by a field access expression
<i>E</i><code>.</code><i>Id</i>, where <i>E</i> is a
<em>Primary</em> expression, or by a method invocation
expression
<i>E</i><code>.</code><i>Id</i><code>(</code>. . .<code>)</code>,
where <i>E</i> is a <em>Primary</em> expression, then the
access is permitted if and only if the type of <i>E</i> is
<i>S</i> or a subclass of <i>S</i>.
</li>
</ul>
<p style="text-align: right; font-size: small; margin-bottom: 0pt;">
See <a href="http://java.sun.com/docs/books/jls/second_edition/html/names.doc.html#62587">JLS3,
Section 6.6.2.1: Access to a protected Member</a>
</p>
</blockquote>
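<p>
To make these rules concrete, here is a small sketch of both a permitted and a forbidden qualified access. (The class <code>B3</code> is hypothetical; it reuses the packages <code>a</code> and <code>b</code> of the earlier examples, so this fragment is meant to be read rather than compiled on its own.)
</p>
<blockquote>
<pre>
package b;
public class B3 extends a.A {
  void f(B3 other) {
    other.secret = 5;  // permitted: the qualifier has type B3, which is S itself
  }
  void g(a.A any) {
    any.secret = 5;    // compile-time error: a.A is not B3 or a subclass of B3
  }
}
</pre>
</blockquote>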
<p>
Note that these more specific rules only apply to instance
members, not to static ones. If you are using a protected static
field or method, make sure that you understand what you're doing:
anyone will be able to access the field or method by just creating
another subclass of the class that declares the protected field or
method. It would not make any sense to impose these additional
rules on static members, since all subclasses actually share the
static member! So, while protected access on instance members
could be used for security reasons, protected access on static
members is only useful for hiding the member by restricting its
accessibility to a part of a program where it is relevant.
</p>
<p>
If you read the rules carefully, they are not too
unclear. The real source of confusion seems to be that there is no
motivation for why these more complex rules are necessary. I hope
this post has remedied that.
</p>
<h2>Lifting Member Classes from Generic Classes (2005-08-02)</h2>
<p>
I've been working on Java generics and member classes during the
last few weeks. In particular, I had to find out how the additional
information on Java generics is exactly represented in bytecode
attributes of generic classes and methods (aka generic
signatures). I was surprised by the way member classes of generic
classes are compiled and I'm worried about the consequences of this
for future updates of the JVM specification. That's what this entry
is about.
</p>
<p>
First, something about the relation between member classes and
lambda lifting. The Java language supports member classes, but Java
bytecode does not. Therefore, Java compilers have to lift member
classes to top-level classes, a transformation that is comparable to
lambda lifting (see for example the paper <a href="http://danae.uni-muenster.de/lehre/kuchen/JFLP/articles/2004/A2004-01/JFLP-A2004-01.pdf">Lambda-Lifting in Quadratic Time</a>).
</p>
<p>
Member classes are compiled to ordinary top-level classes where the
constructor takes an extra argument for the instance of the
enclosing class. For example, the constructor of a member class Bar of
class Foo will get an additional argument of type Foo for the
enclosing instance of a Bar object. Constructors of local classes
(classes declared in a method) also take additional arguments for
the local variables that they use from their enclosing method. This
process of lifting classes (that have lexical scope) is very similar
to lifting nested functions in lambda lifting: all local variables
that are used in the nested class become explicit arguments to make
the nested class <em>scope insensitive</em>. After that, the class
can simply be lifted out of its original scope to the top-level. An
essential property of the class (or function) after lifting is that
the nested class (or function) no longer directly refers to
variables of the original scope of the nested class or function.
</p>
<p>
In Java 5.0, parameterized types and methods (aka generics) have
been introduced. In combination with member classes, this raises the
question how <em>type</em> variables should be handled when lifting
member classes. From the source code point of view, this is pretty
obvious:
</p>
<pre>
class Foo<A> {
class Bar {
A get() { ... }
}
}
</pre>
<p>
If the class Bar is lifted, then its constructor gets an additional
parameter for the enclosing Foo instance. This Foo instance is
parameterized using a type variable A, so the lifted class Bar
should also be parameterized with a type: the type parameter of its
enclosing instance. This lifting of type parameters is comparable to
the lifting of parameters for normal variables. So, the result of
source-level lifting the Bar class could be:
</p>
<pre>
class Foo<A> {
}
class Bar<A> {
private final Foo<A> _enclosing;
public Bar(Foo<A> enclosing) {
_enclosing = enclosing;
}
A get() { ... }
}
</pre>
<p>
Indeed, the Eclipse implementation of the refactoring <em>"Move
Member Type to New File"</em> adds the type parameter to the lifted
class (thumbs up for the generics support in Eclipse!).
</p>
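The lifted version can be exercised directly. The following single-file sketch adds a <code>value</code> field, a constructor for <code>Foo</code>, and a <code>Main</code> class (all my additions, just to make the fragment runnable); it shows that the propagated type parameter makes the lifted <code>Bar</code> type-check just like the original member class:

```java
// Hand-lifted version of the member class, as sketched above.
class Foo<A> {
    A value;
    Foo(A value) { this.value = value; }
}

class Bar<A> {
    private final Foo<A> _enclosing;

    Bar(Foo<A> enclosing) {
        _enclosing = enclosing;
    }

    // The return type still refers to the type parameter A, which is
    // now declared by Bar itself instead of by an enclosing class.
    A get() { return _enclosing.value; }
}

public class Main {
    public static void main(String[] args) {
        Foo<String> foo = new Foo<String>("hello");
        Bar<String> bar = new Bar<String>(foo);
        // No cast needed: get() returns A = String.
        String s = bar.get();
        System.out.println(s);
    }
}
```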
<p>
So, what happens in Java bytecode? Should the lifted class have a
type parameter? Should the lifted class be a valid class,
generically speaking? (Of course, a JVM is currently not required to
understand generics-related information in bytecode.)
</p>
<p>
Well, the lifted class does not have a type parameter and, generically
speaking, it is not a valid class. Let's take a look at the
bytecode, represented in a structured way as an aterm, produced by a
tool called class2aterm (I use ... to leave out some details and //
to explain what the code means. The full aterm is available <a href="http://www.cs.uu.nl/people/martin/FooBar.aterm">here</a>)
</p>
<pre>
$ class2aterm -i Foo\$Bar.class --parse-sig | pp-aterm
ClassFile(
...
// field for the enclosing Foo instance.
Field(
AccessFlags([Final, Synthetic])
, Name("this$0")
, FieldDescriptor(ObjectType("Foo"))
, Attributes([])
)
...
// constructor, taking a Foo argument.
Method(
AccessFlags([])
, Name("<init>")
, MethodDescriptor([ObjectType("Foo")], Void)
, Attributes([])
)
...
// get method with a generic signature
Method(
AccessFlags([])
, Name("get")
, MethodDescriptor([], ObjectType("java.lang.Object"))
, Attributes(
[ MethodSignature(
TypeParams([])
, Params([])
, Returns(TypeVar(Id("A")))
, Throws([])
)
]
)
)
...
// attributes of the class Bar
Attributes(
[ SourceFile("Foo.java")
, InnerClasses( ... )
]
)
...
)
</pre>
<p>
This disassembled class file reveals some interesting details about
the way nested classes are lifted:
</p>
<ul>
<li>
The lifted class Bar is not parameterized: it has no
ClassSignature attribute, which should be there if the class
takes formal type parameters.
</li>
<li>
The field for the enclosing class does not have a parameterized
type. Its type is the <em>raw</em> type Foo!
</li>
<li>
The constructor of Bar (the method name <init>) has no generic
signature and takes a raw type Foo as an argument.
</li>
<li>
The get method <em>does</em> have a generic signature, which
describes that the method returns a type variable A.
</li>
</ul>
<p>
Of course, all the information of the original source can be
reconstructed by a tool that knows about member classes <em>and</em>
generics. But, to a tool that only knows about generics, this code
would be considered incorrect. Hence, if the virtual machine were to
support generics in the future (which is an option explicitly left
open), then this code would be incorrect! The type variable
mentioned in the generic signature of the get method is <em>not in
scope</em>. Hence, the JVM would be required to have knowledge of
inner classes as well as generics to be able to find out which type
parameter this type variable refers to. Unless, of course, the
bytecode format is changed, but that would make it impossible to run
code compiled to the current bytecode format under the new JVM, and
backward compatibility has always been an important requirement for
Sun when working on extensions of the Java platform (language and
virtual machine).
</p>
<p>
Furthermore, the type variable in the signature of the get method is
not qualified. Every single name in Java bytecode is fully
qualified, which is very useful for tools that need to work on
bytecode: they don't have to do name analysis to find out which
construct a name refers to. Type variables are not qualified, which
complicates the analysis that has to be performed by a tool that
operates on bytecode. Not only can this type variable refer to type
parameters of arbitrary enclosing classes, it could also refer to
type parameters of enclosing generic methods (for local classes or
member classes in local classes).
</p>
<p>
The fact that type variables in bytecode are not qualified is
already quite annoying without considering member classes. In the
Java language, it is allowed to redeclare type variables. For
example:
</p>
<pre>
class Foo<A> {
<A> void foo(A x) {
}
}
</pre>
<p>
In this example the type parameter A of the foo method is a
different type parameter than the A parameter of the class Foo. This
basically means that a bytecode processing tool with knowledge of
generics has to do name analysis, which is definitely not something
that is desirable for a bytecode format. Introducing canonical,
fully qualified names for type variables would solve this.
</p>
<p>
As you might know, I'm working on semantic analysis for Java in the
context of the <a href="http://www.strategoxt.org">Stratego/XT</a>
project. My goal is to make it possible to define program
transformations in Stratego at the semantic level: program
transformations can consider the actual meaning of names, the types of
expressions, and so on, without requiring the programmer to redo the
semantic analysis, which is quite complex for a 'real' language like
Java. Obviously, I have decided to qualify type variables. For
example, the parameter A of the method foo in class Foo in the last
example is represented as:
</p>
<pre>
Param(
[]
, TypeVar(
MethodName(TypeName(PackageName([]), Id("Foo")), Id("foo"))
, Id("A")
)
, Id("x")
)
</pre>
<p>
The MethodName is the qualifier of the type variable in this
example. This qualifier makes it immediately clear that the type
variable refers to the type parameter of the method foo.
</p>
<p>
I don't know if this would have been fixed (maybe I see this
completely wrong), but still it's a pity that I wasn't able to give
feedback on this before JSR14 was finished. At that time, I was
still working on the syntactic part of my Java transformation
project (which is now available as <a href="http://www.stratego-language.org/Stratego/JavaFront">Java
Front</a>). I gave some feedback on the syntax of generics,
annotations and enumerations (mostly typos and minor bugs), but
that's about it. To reduce the number of possible problems, I
think it would be very useful if new language features, such as
generics, were also implemented with alternative techniques and
tools. For example, I was able to give some feedback on the syntax
of Java, because I was implementing a parser by creating a
declarative syntax definition in <a href="http://www.syntax-definition.org">SDF</a>, a modular syntax
definition formalism that integrates lexical and context-free
syntax. These unconventional approaches might in general result in
valuable feedback on proposals for new language features.
</p>
<h2>Understanding a Problem (2005-06-03)</h2>
<p>
This week I've been reviewing the solutions of assignments
submitted by students of our <a href="http://www.cs.uu.nl/wiki/Pt">program transformation course</a>. One of the things that strikes me again and again is
how hard it is for students to get a grasp of the problem that
they have to solve in an assignment.
</p>
<p>
In the last few installments of our program transformation
course, the students had to develop a program instrumentation
that traces the number of calls for every callee/caller pair for
one of the assignments (we usually have about 10
assignments). Last year, we just described the problem in the <a href="http://www.cs.uu.nl/wiki/Pt03/AssignmentScopedDynamicRewriteRules">assignment</a>. The students' solutions were OK, but not really exciting. They forgot to handle all kinds of cases, and some solutions didn't even terminate for some input programs.
</p>
<p>
This year, I included a set of tests in the <a href="http://www.cs.uu.nl/wiki/Pt04/AssignmentConcreteObjectSyntax">assignment</a>, which illustrate most (but not all) of the problems in this
program transformation. Surprise, surprise: the students were
suddenly able to handle all the issues illustrated by the
testsuite that I provided. However, I obviously did not give the students all
tests (evil grin). Indeed, several solutions could not handle the tests that
I did not provide.
</p>
<p>
Most students don't write tests. Worse, if they do test, they
create a single file and <em>modify</em> the test to check
each new situation that they might have discovered. In this way
they don't build up a nice testsuite. The important property of
tests is that they can be repeated automatically, not that you
run a test once! I'm trying very hard to convince students to
test their code properly, but they don't seem to understand the
need for it, so you typically get questions like <em>"Is 10
tests enough?"</em>. I'm afraid that this is not a problem
specific to students.
</p>
<p>
This might not be very surprising to you, but I <em>am</em>
surprised how clear the results of this small 'experiment' are
(I did not do this experiment on purpose). I wonder what the
implications of this should be for education. Clearly, having a
grasp of a problem is the most important part of the solution.
</p>
<h2>Generics: The Importance of Wildcards (2005-04-24)</h2>
<p>
or <i>"Why type erasure is not such a bad thing"</i> or <i>"Why
generics in C# are not that good"</i>
</p>
<p>
Last Friday, I read an article that had been on my to-do list
way too long: <a href="http://bracha.org/wildcards.pdf">Adding Wildcards to the Java Programming Language</a>. I've seen
wildcards in Java; I've used wildcards in Java; and I've even
read the <a href="http://java.sun.com/docs/books/jls/">Java Language Specification</a> on wildcards, but I still had not grasped the
essence of wildcards.
</p>
<p>
This paper makes the need for wildcards very clear <em>and</em>
explains why the work on wildcards and parameterized types is
novel. Unfortunately, generics are often ridiculed by functional
programmers. They claim that their type systems have been more
expressive for decades. Fortunately, this paper clearly explains
the issues of introducing generics in an object-oriented
setting.
</p>
<p>
The problem with basic parameterized types is subtyping. For
example, although <code>Integer</code> is a subclass of
<code>Number</code>, a <code>List<Integer></code> is not a
subtype of a <code>List<Number></code>. Hence, if a method
requires a <code>List<Number></code> as an argument, then you
cannot pass a <code>List<Integer></code> to it.
</p>
<p>
Why is a <code>List<Integer></code> not a subclass of
<code>List<Number></code>? Well, this is related to
covariance and contravariance. A type declaration is covariant
if it allowed to be more specific. For example, return types are
covariant. A method that is declared to return a
<code>Number</code> can be overridden to be <em>more
specific</em> and return an <code>Integer</code>. On the other
hand, parameter types are <em>contravariant</em>: a method that
is declared to accept an <code>Integer</code> argument can be
implemented in a <em>more general</em> way by allowing all
<code>Number</code>s.
</p>
<p>
The problem with type parameters is that they are used in method
parameters as well as return types. Hence, they are restricted
to the intersection of covariance and contravariance:
invariance. Thus, type parameters are invariant and a
<code>List<Integer></code> is not a subclass of
<code>List<Number></code>.
</p>
<p>
If Java were restricted to basic parameterized types and
methods, then it would be quite difficult to come up with a good
signature for a method that works on a <code>List</code> that
contains any kind of <code>Number</code>. In fact, you cannot even
declare the type of a list of arbitrary numbers! Allowing
arbitrary numbers requires a generic method with a dummy type
parameter for the 'real' number type, i.e.
</p>
<pre>
<T extends Number> void doSomething(List<T> list) { ... }</pre>
<p>
This works, but it gets quite mind-boggling if the types get
more complex (<i>"The more interesting your types get, the less
fun it is to write them down!"</i> -- <a href="http://www.cis.upenn.edu/~bcpierce/papers/tng-lics2003-slides.pdf">Benjamin C. Pierce</a>).
These types are not only hard to write down: the workaround
does not even work in all cases. For example, you cannot declare
a field that contains arbitrary <code>Number</code>s, since you
cannot introduce a dummy type variable for a field.
</p>
<p>
Wildcards are a language feature that make it a bit more fun to
write down these types. Actually, it is a language feature that
is <em>necessary</em> to write down these types, since the dummy
type variable is just a workaround and uses the type of the
method to declare the type of the argument. The field example
shows that you cannot write down the type itself. Wildcards are
based on the notion of <em>use-site variance</em>. Using
wildcards, you can declare that your list is covariant:
<code>List<? extends Number></code> or contravariant:
<code>List<? super Number></code>. For more details, read the
paper!
</p>
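As a concrete illustration of use-site variance, here is a small self-contained sketch (the class and method names are mine, not from the paper): <code>sum</code> only <em>reads</em> Numbers from its list, so it uses a covariant wildcard, while <code>addOneToTen</code> only <em>writes</em> Integers, so it uses a contravariant one:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WildcardDemo {
    // Covariant use-site: we only read Numbers from the list, so any
    // List<T> with T extends Number is acceptable.
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) {
            total += n.doubleValue();
        }
        return total;
    }

    // Contravariant use-site: we only write Integers into the list, so
    // a List<Integer>, List<Number>, or List<Object> all work.
    static void addOneToTen(List<? super Integer> xs) {
        for (int i = 1; i <= 10; i++) {
            xs.add(i);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        System.out.println(sum(ints));   // accepted: Integer extends Number

        List<Number> nums = new ArrayList<Number>();
        addOneToTen(nums);               // accepted: Number is a supertype of Integer
        System.out.println(sum(nums));
    }
}
```

Note that <code>sum</code> accepts both a <code>List<Integer></code> and a <code>List<Number></code>, which is exactly what the invariance of a plain <code>List<Number></code> parameter forbids.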
<p>
Unfortunately, C# will not support wildcards or a similar
mechanism. The implementation strategy does not allow the
introduction of wildcards (generics are implemented in the
runtime instead of by type erasure). This is a bit surprising,
since the implementation strategy is often claimed to be
superior. What disappoints me is that the designers of C# are
not willing to admit that subtyping is an issue and that
wildcards are a solution. See the weblog of Eric Gunnerson: <a href="http://blogs.msdn.com/ericgu/archive/2004/09/23/233438.aspx">Puzzling through Erasure II</a> and the section on wildcards in <a href="http://blogs.msdn.com/ericgu/archive/2004/06/29/168808.aspx">JavaOne: Day One</a>.
</p>
<h2>Java Surprise 3: The Return of the Class (2005-04-19)</h2>
<p>
First of all: good news! The final version of <a href="http://java.sun.com/docs/books/jls/">The Java Language Specification, Third Edition</a> is now available online! The
specification has been improved considerably since the latest
draft. <a href="http://bracha.org">Gilad Bracha</a> seems to be
responsible for the bulk of the work, which is a tough job. I think
that the result is pretty good, although I'm afraid that I will keep
bothering him with comments and requests for clarification ;).
</p>
<p>
Now back to the issue of this post. First of all: I have no idea how
well-known the issue in this post is. I didn't know it, but it might
actually be quite well-known. I have some references to previous
discussions on this issue at the end of the post.
</p>
<p>
First, I want to say something about what influences the return type
of a method in Java. Before Java 1.5, the return type of a method
was just the plain return type specified in the method
declaration. In other words, the return type did not depend on
anything.
</p>
<p>
Java 1.5 introduces parameterized types and generic methods. The
return type of a method can now also include type variables. This
makes the return type dependent on the values of the type variables
that occur in it. The type variables can have two different scopes:
the class of the method or just the method itself, which makes it a
generic method. So, the actual return type of a method now also
depends on the value of these type variables.
</p>
<p>
However, there is <em>one</em> method in the Java library that does
not return what it declares to return and needs another
dependency. Indeed, there is an additional factor that influences
the return type of this method.
</p>
<p>
The method I'm talking about is <code>Object.getClass()</code>,
which returns the class of an object. In Java 1.5,
<code>Class</code> itself is parameterized with the type that it
represents. For example, the <code>Class</code> for
<code>String</code> is <code>Class<String></code>. The question
is: what should the type parameter of the <code>Class</code>
returned by <code>Object.getClass()</code> be? Well, at the
declaration of the method we basically know nothing, and that is
indeed the declared return type: a wildcard (unknown type) with a
very general bounds: the type must extend <code>Object</code>.
</p>
<pre>
public final Class<? extends Object> getClass()
</pre>
<p>
However, let's take a look at a piece of code where
<code>getClass</code> is invoked. Assuming that
<code>getClass</code> returns what it claims to return, we cannot
declare <code>c</code> to be of a more specific type, for example
<code>Class<List></code>. We must declare it with a very general
value for the type parameter: a wildcard.
</p>
<pre>
List<String> list = ...;
Class<?> c = list.getClass();
</pre>
<p>
This is unfortunate, since we actually know more about the type
parameter of <code>Class</code>. We know <em>at the invocation
site</em> that it is a <code>List</code>, but of course we cannot
declare that in the return type of <code>getClass</code> in this
way! So, we would like to make the return type of
<code>getClass</code> depend on the static type of the
<code>Object</code> on which the method is invoked. In this way, the
variable c could be declared to be of type
<code>Class<List></code>.
</p>
<p>
The developers of the Java specification decided to make the return
type of this method a special case. That is, the Java Language
Specification defines that an invocation of the
<code>getClass</code> method must be treated in special way. In
other words, the return type of the method is different from the one
declared in the source code. The bound of the <code>Class</code>
returned by <code>Object.getClass()</code> is changed by the
specification to the static type of the expression on which the
method <code>getClass</code> is invoked. This is a useful feature,
but it is a pity that this return type cannot be declared!
</p>
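Here is a short example of the special rule in action (the class <code>GetClassDemo</code> and the variable names are mine). The second case previews the erasure subtlety: the wildcard bound is the <em>erasure</em> of the receiver's static type.

```java
import java.util.ArrayList;
import java.util.List;

public class GetClassDemo {
    public static void main(String[] args) {
        String s = "hello";
        // Thanks to the special rule, the result may be typed with the
        // static type of the receiver as the wildcard bound:
        Class<? extends String> c = s.getClass();
        System.out.println(c.getName());

        List<String> list = new ArrayList<String>();
        // The bound is the erasure of the static type, so the type
        // parameter is the raw type List, not List<String>:
        Class<? extends List> rawBound = list.getClass();
        System.out.println(rawBound.getName());
    }
}
```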
<p>
This post is getting <em>way</em> too long, but I would like to
relate this to the implicit <code>this</code> argument of methods in
object-oriented languages. For ordinary method arguments, you can
declare types, which might include type variables. These type
variables can influence the return type of the method. This is more
or less what we want, but now we need this for our implicit
<code>this</code> argument. I'm not sure if a solution in this
direction is more attractive, but there is some link ... Are there
more methods whose return type we would like to depend on the
static type of the object at the invocation site? If so, then
shouldn't this be supported by the language itself? Unfortunately, I
cannot think of an example at the moment ;) .
</p>
<p>
There is even more to tell about this <code>getClass</code> method,
since the type parameter of the <code>Class</code> is not the static
type of the subject expression, but the erased variant of it. Maybe
I'll make that a future post ...
</p>
<p>
Some references to related discussions:
</p>
<ul>
<li>
Bug report in the Sun bug database about using the erased type as
the parameter of the resulting Class:
<a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004321">Object.getClass() should return erased class type</a>
</li>
<li>
Discussion in the Java Generics forum on the same issue:
<a href="http://forum.java.sun.com/thread.jspa?threadID=496028&start=0&tstart=0">Are there bugs in the generics tutorial?</a>
</li>
<li>
Bug report for the Eclipse JDT subproject:
<a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=58666">Object.getClass() need to be treated special ?</a>
</li>
<li>
Another Generics FAQ:
<a href="http://www.angelikalanger.com/GenericsFAQ/FAQSections/TechnicalDetails.html#Is%20the%20capture%20of%20a%20bounded%20wildcard%20compatible%20to%20the%20bound?">Is the capture of a bounded wildcard compatible to the bound?</a>
</li>
</ul>
<h2>Java Surprise 2: Motivation (2005-04-06)</h2>
<p>
In the previous posts I showed that the priority of a cast to a
reference type is different from the cast to a primitive type. <a href="http://www.cs.vu.nl/~mvermaat/">Martijn Vermaat</a> asked me why
the designers of the Java language made this decision. Of course, they
have good reasons for this design decision, but the decision is still
questionable, especially now that we have autoboxing.
</p>
<p>
Let's take a look at this example from the original post:
</p>
<pre>
$ echo "(Integer) - 2" | parse-java -s Expr | aterm2xml --implicit
<Minus>
<ExprName><Id>Integer</Id></ExprName>
<Lit><Deci>2</Deci></Lit>
</Minus>
</pre>
<p>
If no priorities were defined in the Java language, then this
expression would be ambiguous. I can illustrate this by parsing the
same expression using a Java grammar that does not declare
priorities. I'm using the <a href="http://www.syntax-definition.org/SdfSoftware">SGLR</a> parser for this,
which is capable of producing a parse forest (multiple parse trees) if
an input is ambiguous. The alternatives are represented by an
<code>amb</code> element with 2 or more children.
</p>
<pre>
$ "(Integer) - 2" | sglri -p JavaAmb.tbl | aterm2xml --implicit
<amb>
<Minus>
<ExprName>
<Id>Integer</Id>
</ExprName>
<Lit>
<Deci>2</Deci>
</Lit>
</Minus>
<CastRef>
<ClassOrInterfaceType>
<TypeName>
<Id>Integer</Id>
</TypeName>
</ClassOrInterfaceType>
<Minus>
<Lit>
<Deci>2</Deci>
</Lit>
</Minus>
</CastRef>
</amb>
</pre>
<p>
This clearly shows that the input is ambiguous: the first alternative
is the binary operator (which is the alternative chosen by the Java
language) and the other alternative is a cast to a reference
type.
</p>
<p>
However, the cast to an <code>int</code> is <em>not</em> ambiguous,
since <code>int</code> is a reserved keyword, thus forbidden as an
identifier. So, for this input there is only a single parse option,
even in the ambiguous version of Java.
</p>
<pre>
$ echo "(int) - 2" | sglri -p JavaAmb.tbl | aterm2xml --implicit
<CastPrim>
<Int/>
<Minus>
<Lit><Deci>2</Deci></Lit>
</Minus>
</CastPrim>
</pre>
<p>
The ambiguity in the first example has to be resolved. So, what should
the language designer do? Prefer the cast, or prefer the binary minus?
Well, that decision is not very hard: in the first example, the
<code>(Integer)</code> is a parenthesized expression, where the
expression is the variable <code>Integer</code>. If we ignore this
actual value (since it is quite distracting), then the structure of
the expression is <code>( Expression ) - Expression</code>. You will
recognize the need for this pattern, since the expression <code>(a * b) - c</code> has exactly the same structure!
</p>
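The same surface shape occurs in perfectly ordinary code, which is why the parenthesized-expression reading cannot simply be dropped. A minimal sketch (class, method, and variable names are mine):

```java
public class ParenShape {
    // "(a * b) - c" has the shape "( Expression ) - Expression",
    // exactly the shape of "(Integer) - 2"; a Java parser must keep
    // the binary-minus reading for parenthesized expressions like this.
    static int shape(int a, int b, int c) {
        return (a * b) - c;
    }

    public static void main(String[] args) {
        System.out.println(shape(6, 2, 3)); // prints 9
    }
}
```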
<p>
The cast to a primitive type does not have the ambiguity problem,
since all primitive types are keywords and all keywords are forbidden
as identifiers. So, there is no reason to disallow a primitive cast
at this location, and for this reason the language designers changed
the priority of the primitive cast.
</p>
<p>
Are there alternatives? Yes, there are, but they are not very
attractive either. First, a parenthesized expression name could be
forbidden. Using parentheses for a plain identifier (or a qualified
name) does not make a lot of sense. Another option is to disallow
casts to primitive types at this location. This can be annoying, but
it makes things clearer and more consistent.
</p>
<p>
Of course, having two different production rules for casts is not
attractive. It's just a single language construct, so it should be
defined by a single production as well. I wonder what the language
designers would have done if autoboxing had already been included in
the first version of Java, since autoboxing makes the distinction
between a reference cast and a primitive cast visible.
</p>Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-6943366.post-1112735843406888702005-04-05T14:12:00.000-07:002005-04-05T14:17:23.406-07:00Java Surprise 2: Another Example<p>
While browsing through the <a href="https://svn.cs.uu.nl:12443/repos/StrategoXT/java-front/trunk/test/v1.5/expressions.testsuite">micro testsuites</a> of Java-front, I was reminded of another typical example:
</p>
<pre>
int x = (int) ++y;
int x = (Integer) ++y;
</pre>
<p>The first statement is allowed. The second is not.</p>
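The grammar is the culprit: a primitive cast accepts any unary expression as its operand, while a reference cast only accepts an operand that does not start with <code>++</code>, <code>+</code>, or <code>-</code>. A runnable sketch of the allowed form (class and method names are mine):

```java
public class CastOperand {
    // A primitive cast takes any unary expression as its operand,
    // so "++y" is a valid operand: y becomes 6 and the cast yields 6.
    static int primCast() {
        int y = 5;
        return (int) ++y;
    }

    public static void main(String[] args) {
        System.out.println(primCast()); // prints 6
        // int bad = (Integer) ++y;  // would not compile: a reference
        // cast only accepts an operand that does not start with ++, +,
        // or - (UnaryExpressionNotPlusMinus in the JLS grammar).
    }
}
```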
<p>(See the first post on Java Surprise 2 for an explanation)</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6943366.post-1112734034648787482005-04-05T13:42:00.000-07:002005-04-05T14:23:38.966-07:00Java Surprise 2: Cast Priority<p>
I promised a Java Surprise series. Thanks to my full-time job this
promise is not hard to remember: every few days there is a
fresh surprise for me ;) . The second surprise in this series is
actually one I discovered last summer, so I'm cheating a bit. If you
know me in real life, then I've probably already bothered you with
this one.
</p>
<p>
First of all: please take a seat. Are you sitting comfortably?
Excellent. Did you know that the syntactical priority of a cast to a
primitive type is different from the cast to a reference type? Well,
it is. Most likely, you will never encounter this, but it is not hard
to find an example that will surprise you.
</p>
<p>
You are probably familiar with autoboxing in Java 1.5. In short,
autoboxing can convert primitive types (such as <code>int</code>) to
reference types (such as <code>Integer</code>) for you if
necessary. Hence, you can assign an <code>int</code> to an
<code>Integer</code> and you can also cast an <code>int</code> to an
<code>Integer</code>. Some (correct) statements:
</p>
<pre>
Integer x = 3;
Integer y = (Integer) 3;
int z = (Integer) 3;
</pre>
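Wrapped in a class, those statements compile and run as-is; the surrounding class and method are mine:

```java
public class AutoboxDemo {
    static int sum() {
        Integer x = 3;            // autoboxing: the int literal is boxed
        Integer y = (Integer) 3;  // explicit boxing cast, also legal
        int z = (Integer) 3;      // boxed to Integer, then unboxed to int
        return x + y + z;         // x and y are unboxed for the addition
    }

    public static void main(String[] args) {
        System.out.println(sum()); // prints 9
    }
}
```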
<p>
I'm going to abuse your familiarity with autoboxing to show how weird
it can be that the priority of primitive casts is different from
reference casts. The following program is a correct program that
includes a (redundant) cast to an <code>int</code>.
</p>
<pre>
public class JavaSurprise2 {
public static void main(String[] ps) {
int y = (int) - 2;
System.out.println(String.valueOf(y));
}
}
</pre>
<p>
Compile and run:
</p>
<pre>
martin@logistico:~/tmp> javac JavaSurprise2.java
martin@logistico:~/tmp> java JavaSurprise2
-2
</pre>
<p>
Well, that looks great. Now, let's replace the <code>int</code> with
an <code>Integer</code>.
</p>
<pre>
public class JavaSurprise2 {
public static void main(String[] ps) {
int y = (Integer) - 2;
System.out.println(String.valueOf(y));
}
}
</pre>
<p>
Compile ...
</p>
<pre>
martin@logistico:~/tmp> javac JavaSurprise2.java
JavaSurprise2.java:4: cannot find symbol
symbol : variable Integer
location: class JavaSurprise2
int y = (Integer) - 2;
^
JavaSurprise2.java:4: illegal start of type
int y = (Integer) - 2;
^
2 errors
</pre>
<p>
What the heck? Cannot find symbol? Let's give it a symbol ...
</p>
<pre>
public class JavaSurprise2 {
public static void main(String[] ps) {
int Integer = 3;
int y = (Integer) - 2;
System.out.println(String.valueOf(y));
}
}
</pre>
<p>
Compile and run ...
</p>
<pre>
martin@logistico:~/tmp> javac JavaSurprise2.java
martin@logistico:~/tmp> java JavaSurprise2
1
</pre>
<p>
So, what happens? Of course the compiler is right. As I said in the
beginning, the priority of a cast to a primitive type is different
from a reference type. Because of the priorities defined in the Java
Language Specification, the <code>int</code> example is parsed as a
cast. However, the <code>Integer</code> version is parsed as an
expression name: an expression that can be referred to using a name
(aka a variable). The Java compiler will never come back to this
decision and make it a cast after all: syntactic choices are always
committed.
</p>
<p>
I can illustrate these different parses using <a href="http://www.strategoxt.org/Stratego/JavaFront">Java-front</a>, a
package that provides a Java parser that is generated from a
declarative syntax definition for Java in <a href="http://www.syntax-definition.org">SDF</a> (yes, I'm the developer: marketing intended ;) )
</p>
<pre>
martin@logistico:~/tmp> echo "(Integer) - 2" | parse-java -s Expr
Minus(ExprName(Id("Integer")),Lit(Deci("2")))
martin@logistico:~/tmp> echo "(int) - 2" | parse-java -s Expr
CastPrim(Int,Minus(Lit(Deci("2"))))
</pre>
<p>
Or in terms of XML:
</p>
<pre>
martin@logistico:~/tmp> echo "(Integer) - 2" \
  | parse-java -s Expr | aterm2xml --implicit
<Minus>
<ExprName><Id>Integer</Id></ExprName>
<Lit><Deci>2</Deci></Lit>
</Minus>
martin@logistico:~/tmp> echo "(int) - 2" \
  | parse-java -s Expr | aterm2xml --implicit
<CastPrim>
<Int/>
<Minus>
<Lit><Deci>2</Deci></Lit>
</Minus>
</CastPrim>
</pre>
<p>
Surprised? You'd better be!
</p>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-6943366.post-1112083803145981112005-03-28T23:55:00.000-08:002005-04-05T13:01:37.500-07:00Java Surprise 1: Overloading and Inner Classes<p>
As some of you might know, I'm working on implementing components of a Java compiler in Stratego. Obviously, I have to study the Java Language Specification in great detail for that. I had the impression that I knew a lot about the Java language, but I still learn a lot of new details. Some of these details are funny, some are not. I've already encountered a lot of these cases and I'll try to blog about them from now on.
</p>
<p>
My first post in this series is about this fragment:
</p>
<pre>
class Foo {
void f(String s) {}
class Bar {
void f(int x) {}
class Fred {
void g() { f("aaa"); }
}
}
}
</pre>
<p>
Did you know that you cannot overload the method <code>f</code> in this way?
</p>
<p>
The reason for this is that the specification separates method invocation into a few phases. The first compile-time phase determines the class to search for the method to invoke. For a plain method invocation (just an identifier), the JLS specifies that the class to search for methods is the <em>innermost</em> type declaration that has a method of that name. In this case, this will be the class <code>Bar</code>. Hence, the later phases that handle method overloading will not consider the method <code>f</code> that takes a String argument.
</p>
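Since plain <code>f(...)</code> is resolved against <code>Bar</code> before overloads are even considered, the usual workaround is to qualify the receiver explicitly. A sketch of that fix (the return values are mine, added to make the resolution observable):

```java
class Foo {
    String f(String s) { return "Foo.f(String)"; }

    class Bar {
        String f(int x) { return "Bar.f(int)"; }

        class Fred {
            // Plain f("aaa") would not compile here: lookup first picks
            // Bar, the innermost class declaring any method named f, and
            // Bar.f only takes an int. Qualifying the receiver with
            // Foo.this reaches the outer overload.
            String g() { return Foo.this.f("aaa"); }
        }
    }
}
```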
<p>
I don't think I've encountered this issue during my Java programming. Did you?
</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6943366.post-1101151162725867032004-11-22T11:09:00.000-08:002004-11-22T11:19:22.726-08:00Lexical Macros are Bad<p>
<a href="http://arthurvd.blogspot.com/">Arthur van Dam</a> just created this nice picture, with a clear statement, for me:
</p>
<center>
<img src="http://losser.st-lab.cs.uu.nl/~adam/lexical_macros_are_bad.jpg" width="400">
</center>
<p>
It's going to be featured in our discussion of <a href="http://www.brics.dk/RS/00/24/">Growing Languages with Metamorphic Syntax Macros</a>, this Tuesday as part of the Software Generation and Configuration course I mentioned before. Thanks Arthur!
</p>
Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-6943366.post-1101053705786179272004-11-21T08:05:00.000-08:002004-11-21T08:19:34.516-08:00Meta Blog: How Do I Look? <p>
Last week I changed the look of my <a
href="http://www.cs.uu.nl/groups/ST/Martin/WebHome">homepage</a>
(which is a Wiki) to a style derived from Blogger's Rounders 3
template, which was designed by <a
href="http://www.stopdesign.com">Douglas Bowman</a>. Of course, I
wanted to change the look of my blog as well; what you see now is
the result of this. I hope you like it. As you can see, I'm fond
of the combination of shades of blue and gray ;) .
</p>
<p>
The sources of my blog template are <a href="https://svn.cs.uu.nl:12443/repos/mbravenboer/blog/">available</a> from my Subversion repository. The <a href="https://svn.cs.uu.nl:12443/repos/mbravenboer/BlueBoxSkin/">sources of the Wiki skin</a> for my homepage are there as well. Feel free to use it.
</p>Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-6943366.post-1100683238319592642004-11-17T01:58:00.000-08:002004-11-17T02:40:01.996-08:00Paper of the Day <p>
Yesterday, I read the article <a
href="http://www2.parc.com/csl/groups/sda/publications/papers/Kiczales-IMSA92/for-web.pdf">"Towards
a New Model of Abstraction in Software Engineering"</a> by Gregor
Kiczales. We are going to discuss this paper tomorrow (Thursday)
in our <a href="http://www.cs.uu.nl/groups/ST/Sgc/WebHome">master
seminar on software generation and configuration</a>. I'm not
really convinced that aspect-oriented programming (as it is
currently implemented in AspectJ) is the way to go, but this
earlier article is brilliant!
</p>
<p>
The problem with abstraction is very well described: abstractions
cannot hide their implementations. The need for a separation of
meta-level interfaces from base interfaces is entirely clear after
reading this paper. The paper immediately reminded me of <a
href="http://www.joelonsoftware.com/articles/LeakyAbstractions.html">The
Law of Leaky Abstractions</a>. The law introduced in this
excellent article by Joel Spolsky is cited quite
frequently. However, the credits for identifying this problem (and
suggesting a solution!) should go to this article by Gregor
Kiczales. I think that many of the ideas expressed in his article
are still not realized and researched thoroughly enough.
</p>
<p>
Another interesting thing to note is that annotations and
attributes as they are available in C# and Java are not really
that novel. Until now, it was unclear to me where the idea of
attributes in C# actually came from. I think other people have
this problem as well, since the idea of adding attributes to
source code is often described as being truly novel. After
reading more about <a
href="http://www.cs.uu.nl/groups/ST/Sgc/OpenCompilers">metaobject
protocols</a>, it seems that annotations are nothing more than
what was already available in the earliest MOP systems. Why has
this link never been explained? Or did I miss something?
</p>Unknownnoreply@blogger.com2