# Planet Primates

## April 19, 2014

### Fefe

#### Whenever I post about security holes here, I regularly ...

Whenever I post about security holes here, I regularly get visited by a few crazies who try to explain to me that it's all C's fault, not the programmers' fault, and not because the task is simply complex. Well then, let's have a look at what happens when you do crypto in Java. Oh. The key leaks because OO programming with exception handling exposes a timing side channel? You don't say!

Fact is: OpenSSL is not written in C because C is so awesome. OpenSSL is written in C because the goal is for it to be usable from any programming language. And they can all call into C code.

Another fact: implementing SSL correctly is a complex problem. And if on top of that you insist on supporting every obscure platform and every extension, then the result is simply crap. If you look at what OpenBSD is currently cutting out of OpenSSL with a flaming machete, it makes your head spin. Among other things: support for big-endian x86_64 platforms. Has anyone ever heard of such a thing? No? How about VMS? EBCDIC support? Support for "odd" word lengths? You name it, OpenSSL supports it.

Incidentally, this Java issue also appears in a dissertation by one of the people involved. It can be read here (208 pages).

#### Drug shortages: something you might expect in Africa or ...

Drug shortages: something you might expect in Africa or maybe India, which is why they are building up their own generics production there. Less well known is that this is a real problem in Germany too. The cause (besides specific incidents, such as GlaxoSmithKline having problems at its factory in Belgium) is that the market has essentially degenerated into monopolies.
For the German Academy for Pediatric and Adolescent Medicine, such supply shortages are the expression of a fundamental problem: the monopolization of the vaccine production market. There are fewer and fewer manufacturers, and product diversity is shrinking. The consequence: in the event of supply shortages or a market withdrawal, there is often no alternative preparation to fall back on, and a prioritization results that is questionable from an ethical point of view.
How scary is THAT!? Yikes.

Now I'm asking myself whether speculators have been hoarding stock, or whether there really aren't enough reserves per se to meet demand for the coming months. That can't be true!

### /r/clojure

#### How to set up CLJS + Emacs with a REPL for client-side dev only

How do I set up Emacs and CLJS?

I'm working on a d3 app, client-side code only (at the moment). I'm trying to set up a REPL in Emacs that connects to the browser. I found cider and austin, but I'm not sure how they connect, if at all. Internet research shows that all projects involving cljs have cljsbuild in their project.clj; I'm not sure how that connects with cider and austin either.

I just want to write code, send it to the REPL for evaluation, and get going. I'd rather do that than compile the CLJS each time I save the file and refresh the browser afterwards.

I've been scouring the internets for the past 3 days for tutorials, guides, documentation and blog posts, and have tried I don't know how many solutions, but nothing worked. I just couldn't make this happen on my own.

How do I get this up?

submitted by xwildeyes

### CompsciOverflow

#### How to prove correctness of BFS algorithm

How do we prove the correctness of BFS or DFS algorithms for finding the connected components of a graph?

I have come up with a traversal algorithm which is very similar to these algorithms, but I need to prove its correctness.

Any reference would be appreciated.
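For concreteness, here is a minimal BFS-based connected-components routine (a sketch of mine, not from the question), annotated with the invariant a correctness proof typically establishes: every vertex is enqueued at most once, and only after being shown reachable from its component's start vertex.

```scala
import scala.collection.mutable

// Label each vertex of an undirected graph with a component id via BFS.
// Invariant: a vertex is enqueued at most once (guarded by comp(v) == -1),
// and only if it is reachable from the start vertex of its component.
def components(n: Int, edges: Seq[(Int, Int)]): Array[Int] = {
  val adj = Array.fill(n)(mutable.ListBuffer.empty[Int])
  for ((u, v) <- edges) { adj(u) += v; adj(v) += u }
  val comp = Array.fill(n)(-1)
  var c = 0
  for (s <- 0 until n if comp(s) == -1) {
    val queue = mutable.Queue(s)
    comp(s) = c
    while (queue.nonEmpty) {
      val u = queue.dequeue()
      for (v <- adj(u) if comp(v) == -1) { comp(v) = c; queue.enqueue(v) }
    }
    c += 1
  }
  comp
}
```

The proof then argues both directions of the invariant: soundness (every labeled vertex is reachable) by induction on dequeue order, and completeness (every reachable vertex gets labeled) by contradiction on the first unlabeled vertex along a path from the start.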

### StackOverflow

#### Solved: How to convert a list of hashmaps into one hashmap in Clojure?

I have a list which looks like this:

({:course 2, :mark 9} {:course 5, :mark 8} {:course 6, :mark 10})


And I want to convert it to a hashmap:

{:2 9 :5 8 :6 10}


The list was created from a MySQL database. I don't know whether I can get the data from the database in some other format that would be easier to convert into one hashmap; I used the java.jdbc query function.

Can anybody help me?

EDIT: I solved this by

(let [myHashMap (into {} (map #(hash-map (keyword (str (first %))) (first (rest %))) (map rearrangeList listOfMarksFromDatabase))) ])
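For comparison, the same reshaping is a one-step fold into a map; here is a hedged Scala sketch (the Row case class and its field names are hypothetical stand-ins for the question's :course/:mark keys):

```scala
// Each database row carries a course id and a mark, as in the question's data.
case class Row(course: Int, mark: Int)

val rows = List(Row(2, 9), Row(5, 8), Row(6, 10))

// Collapse the list of rows into a single course -> mark map.
val marksByCourse: Map[Int, Int] = rows.map(r => r.course -> r.mark).toMap
```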


#### How to run specifications sequentially

I want to create a few specifications that interact with a database.

class DocumentSpec extends mutable.Specification with BeforeAfterExample {
  sequential

  def before() = { createDB() }
  def after() = { dropDB() }

  // examples
  // ...
}


The database is created and dropped before and after every example (the examples are executed sequentially). Everything works as expected as long as only one spec works with the database. Because specifications are executed in parallel, they interfere and fail.

I hope I can avoid this by instructing specs2 to run tests with side effects sequentially, while tests that are free of side effects keep running in parallel. Is that possible?

### Overcoming Bias

#### The Up Side Of Down

In her new book, The Up Side of Down: Why Failing Well Is the Key to Success, Megan McArdle takes some time to discuss forager vs. farmer attitudes toward risk.

Forager food sources tended to be more risky and variable, while farmer food sources are more reliable. So foragers emphasized food sharing more, and a tolerant attitude toward failure to find food. In contrast, farmers shared food less and held individuals more responsible for getting their own food. We’ve even seen the same people switch from one attitude to the other as they switched from foraging to farming. Today some people and places tend more toward farmer values of strict personal responsibility, while other people and places tend more toward forager forgiveness.

McArdle’s book is interesting throughout. For example, she talks about how felons on parole are dealt with much better via frequent reliable small punishments, relative to infrequent random big punishments. But when it comes to bankruptcy law, a situation where the law can’t help but wait a long time to respond to an accumulation of small failures, McArdle favors forager forgiveness. She points out that this tends to encourage folks who start new businesses, which encourages more innovation. And this does indeed seem to be a good thing.

Folks who start new businesses are pretty rare, however, and it is less obvious to me that more leniency is good overall. It is not obvious that ordinary people today face more risk than did most farmers during the farming era. The US apparently has the most lenient bankruptcy law in the world, and that is indeed some evidence for its value. However, it seems to me more likely that US forager forgiveness was caused by US wealth. McArdle tells us that the US got lenient bankruptcy in the late 1800s via lobbying by senators representing western farmers in debt to eastern banks. And it is hard to see how farming in the US west was more risky than farming throughout the whole farming era.

Most likely what changed was the wealth of US farmers, and their new uppity attitudes toward rich elites. This fits with debt-forgiveness being a common liberal theme, which fits with liberal attitudes being more forager-like, and becoming more common as rising wealth cut the fear that made farmers. If lenient bankruptcy law is actually better for growth in our world, this seems another example of Caplan’s idea trap, where rising wealth happens to create better attitudes toward good policy.

Overall I found it very hard to disagree with anything that McArdle said in her book. If you know me, that is quite some praise.

### QuantOverflow

#### How is the default probability implied from market implied CDS spreads for CVA/DVA calculation?

According to point 38 on p. 17, the default probability can be implied from market-implied CDS spreads. A "Macro Surface" method is mentioned, but I cannot get any clue of what it is. Where do I find the academic reference for that?

Also, what is the commonly used methodology for implying default probability in CVA/DVA calculation?

The article "Credit and Debit Valuation Adjustment" can be found at http://www.ivsc.org/sites/default/files/IVSC%20CVA%20-DVA%20%20ED_0.pdf

### TheoryOverflow

#### Proving a language (ir)regular (standard methods have failed)

I'm currently trying to prove a language regular (for personal amusement). The language is:

The language containing all numbers in ternary that have even bit-parity when encoded in binary.

Now, I've tried a few different approaches, none of which have led to any success. I've tried using the pumping lemma (couldn't find anything to pump on), Myhill-Nerode (similarly), and even counted the number of strings of each length for which the statement is true (my intuition is that it checks out with a probabilistic argument).

Are there any other approaches that might help here, or are there any intuitions that might be helpful? At this point, my best guess is that the language is not regular, but I don't seem to be able to come up with an explanation.
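One way to build intuition is to enumerate small members of the language; the sketch below is my own exploration code (not from the question) and lists the ternary encodings of numbers whose binary representation has an even number of 1 bits:

```scala
// Even bit-parity of n's binary encoding.
def evenParity(n: Int): Boolean = Integer.bitCount(n) % 2 == 0

// Ternary (base-3) encoding of n.
def ternary(n: Int): String = java.lang.Integer.toString(n, 3)

// First few members of the language: ternary strings of numbers
// with an even number of 1s in binary (0, 3, 5, 6, 9, ...).
val members = (0 to 30).filter(evenParity).map(ternary).toList
```

Counting members per length against the totals can then feed the probabilistic argument mentioned above, though of course it proves nothing either way.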

### StackOverflow

#### Play test-only doesn't ignore tests

I have a Play application and I need to ignore my functional tests when I compile and build, then later run only the integration tests.

This is my test code:

ApplicationSpec.scala

@RunWith(classOf[JUnitRunner])
class ApplicationSpec extends Specification {

  "Application" should {
    "send 404 on a bad request" in new WithApplication {
      route(FakeRequest(GET, "/boum")) must beNone
    }
  }
}


IntegrationSpec.scala

@RunWith(classOf[JUnitRunner])
class IntegrationSpec extends Specification {

  "Application" should {
    "work from within a browser" in {
      running(TestServer(9000), FIREFOX) { browser =>
        browser.goTo("http://localhost:9000/app")
        browser.pageSource must contain("Your new application is ready.")
      }
    }
  } section "integration"
}


The docs tell me I can use something like this from the command line:

play "test-only -- exclude integration"

The only problem is that this doesn't actually exclude any tests: my integration tests invoke Firefox and start running. What am I doing wrong? How can I exclude the integration tests and then later run them by themselves?

#### Using Scala class defined in package object from Java

The following Scala example defines a class inside a package object:

package com.mycompany

package object test {
class MyTest {
def foo(): Int = {
42
}
}
}


The following three classes are generated:

com/mycompany/test/package.class
com/mycompany/test/package$.class
com/mycompany/test/package$MyTest.class


The problem arises when trying to use the MyTest class from Java. I think that since package$MyTest contains a $ in its name, Java does not acknowledge its existence. Nevertheless, the package$ class is accessible. Running javap on package$MyTest.class returns:

Compiled from "Test.scala"
public class com.mycompany.test.package$MyTest {
  public int foo();
  public com.mycompany.test.package$MyTest();
}


I've tried accessing the class using Eclipse, Intellij and Netbeans, without success. Is it possible to use Scala classes defined in package objects from Java?

#### Is there some ways to avoid this kind of duplicated code in clojure?

I get some parameters and then call a function which accepts a map of parameters. The keys of the map have the same names as the parameters, like this:

(GET "/api/search" [nick_name gender phone max_age min_age page lmt]
  (db-search-users :nick_name nick_name :gender gender :phone phone
                   :max_age max_age :min_age min_age :page page :lmt lmt))


Is there some way to avoid the copy and paste?

### DragonFly BSD Digest

#### In Other BSDs for 2014/04/19

I’ve got “coverage” of most every BSD this week.

### StackOverflow

#### leftReduce Shapeless HList of generic types

This is essentially what I want:

case class Foo[T](x: T)

object combine extends Poly {
implicit def caseFoo[A, B] = use((f1: Foo[A], f2: Foo[B]) => Foo((f1.x, f2.x)))
}

def combineHLatest[L <: HList](l: L) = l.reduceLeft(combine)


So combineHLatest(Foo(1) :: Foo("hello") :: HNil) should yield Foo( (1, "hello") )

The above doesn't compile, as it cannot find an implicit LeftReducer, but I'm at a loss as to how to implement one.

### TheoryOverflow

#### Calculating exact/approximate solution to a formula

Suppose we have a set of variables $\mathbf{y} = \left(y_1, ..., y_n \right)$. Also consider the set of functions $g_i(y_i), 1 \leq i \leq n$. Note that $g_i(\cdot)$ depends only on $y_i$.

Consider the sum: $$\sum_{\mathbf{y} \in C} \prod_{i=1}^n g_i(y_i)$$ where $C$ is the feasible space for $\mathbf{y}$. The question is how to evaluate this quantity (or an approximation to it) efficiently.

Note that computing this naively costs $m^n$ operations (assuming that each $y_i$ can take $m$ possible values). To make it clearer, consider the following example: $$\sum_{(y_1, y_2) \in C} g_1(y_1) g_2(y_2)$$

You can make the following assumptions (but not limited to):

• $\mathcal{C}$ can be represented with a linear constraint: $$A\mathbf{y} \leq b$$
• $g_i(y_i)$ is bounded above by some number $M$.
• $g_i(y_i)$ is a convex/super-convex function.
• Or any other necessary assumption that can help to solve or approximate this.

Update1: It might be useful to know that $$\ln \sum_{\mathbf{y} \in C} \prod_{i=1}^n g_i(y_i) \leq \sum_{\mathbf{y} \in C} \ln \prod_{i=1}^n g_i(y_i) = \sum_{\mathbf{y} \in C} \sum_{i=1}^n \ln g_i(y_i)$$
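One remark that may help frame the difficulty (my observation, not part of the question): if the feasible space were an unconstrained product $C = Y_1 \times \cdots \times Y_n$, the sum would factorize,

$$\sum_{\mathbf{y} \in Y_1 \times \cdots \times Y_n} \prod_{i=1}^n g_i(y_i) = \prod_{i=1}^n \sum_{y_i \in Y_i} g_i(y_i),$$

which costs $O(nm)$ rather than $m^n$. So the hardness comes entirely from the coupling constraint $A\mathbf{y} \leq b$, and any efficient method has to exploit its structure.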

### Fred Wilson

#### Video Of The Week: The Gotham Gal on TWIST

Last summer, The Gotham Gal went on Jason Calacanis’ show, This Week In Startups. I had never watched it until this morning. It’s fun to see two people who know each other well (they worked together in the late 90s) do a conversation. It’s an hour long but there is some good stuff in here.

### Fefe

#### Kurzer Redebeitrag der CDU/CSU im EU-Parlament:"Ich ...

Kurzer Redebeitrag der CDU/CSU im EU-Parlament:
"Ich kann nicht akzeptieren, dass wir keinen Entwurf mehr bekommen sollen", sekundierte Weber der CDU-Abgeordnete Axel Voss. Die Bedrohungslage sei "eher noch größer geworden" seit der Kernarbeit an der Richtlinie 2005, als viele Politiker noch unter den Anschlägen auf den öffentlichen Nahverkehr in Madrid und London standen. Neben dem Recht auf Freiheit gebe es auch eins auf Sicherheit in der Charta der Grundrechte, unterstrich Voss. Zudem sei die Strafverfolgung ohne Verbindungs- und Standortdaten "in vielen tausend Fällen nicht mehr möglich". Der Christdemokrat warf daher die alternative Frage auf, ob "wir zu Selbstjustiz übergehen sollen".
Tolle Idee! Genau so machen wir das! Wir sperren am besten die CDU-Fraktion weg und berufen uns auf ihr selbst vorgetragenes Recht auf Sicherheit und die von ihr selbst vorgeschlagene Selbstjustiz als Methode.

### /r/compsci

#### How to get into the field of AI?

Hi,

I am a first year EE/CS double major interested in AI and machine learning and would like to get involved in the field.

I would like to know which undergrad courses would prepare me best for the field of AI. My program is very flexible, I can do subjects from EE, CS, maths or economics. So it is up to me to structure my degree. I am thinking about doing the following:

• Maths: graph theory and linear algebra, multivariable calculus, differential equations, real analysis, complex analysis, discrete maths, and probability and statistics.
• CS: design and analysis of algorithms, algorithms and data structures, object-oriented programming, database systems, theory of computation, computer systems, intro to artificial intelligence and machine learning, software simulations, and a CS project.
• EE: all of them.
• Economics: 4 subjects in economics.

Do you think these courses are a good way to prepare myself for AI? What else should I do?

I am very passionate about AI and have been interested in the field for a long time. I have read the book "AI - A Modern Approach" and it made me love the field even more.

submitted by member2357

### StackOverflow

#### Infer multiple generic types in an abstract class that should be available to the compiler

I am working on an abstract CRUD-DAO for my play2/slick2 project. To have convenient type-safe primary IDs I am using Unicorn as an additional abstraction and convenience layer on top of Slick's MappedTo & ColumnBaseType.

Unicorn provides a basic CRUD-DAO class BaseIdRepository, which I want to further extend for project-specific needs. The signature of the class is:

class BaseIdRepository[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
(tableName: String, val query: TableQuery[T])
(implicit val mapping: BaseColumnType[I])
extends BaseIdQueries[I, A, T]


This leads to DAO implementations looking something like

class UserDao extends
BaseIdRepository[UserId, User, Users]("USERS", TableQuery[Users])


This seems awfully redundant to me. I was able to supply tableName and query from T, giving me the following signature for my own abstract DAO:

abstract class AbstractIdDao[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
extends BaseIdRepository[I,A,T](TableQuery[T].baseTableRow.tableName, TableQuery[T])


Is it possible in Scala to somehow infer the types I and A to make a signature like the following possible? (Users is a class extending IdTable)

class UserDao extends AbstractIdDao[Users]


Is this possible without runtime reflection? If it works only via runtime reflection: how do I use a Manifest in a class definition, and how big is the performance impact in a reactive application?

Also, since I am fairly new to the language and work on my own: is this good practice in Scala at all?

Thank you for your help. Feel free to criticize my question and my English. Improvements will of course be submitted to the Unicorn git repo.

EDIT: Actually, TableQuery[T].baseTableRow.tableName, TableQuery[T] does not work due to the error "class type required but T found"; IDEA was superficially fine with it, scalac wasn't.

#### How to upgrade Scala to a newer version from the command line?

Is there a way to upgrade the installed Scala version via sbt or another command-line tool?

I'm sure there is a way, but I couldn't find any after a quick search. Am I missing anything?

### /r/compsci

#### What is an appropriate textbook for this mathematics for CS course?

I'm using this course to help me learn mathematics for Computer Science. Even the first couple of lectures have made me see maths in a different light; I find it totally awesome and interesting now. The problem is that for some subjects I feel a textbook would be useful. I can't find one on the site, and I don't know where I could find one, so I'm asking you guys. Thanks!

submitted by a-single-tear

### QuantOverflow

#### Calculating instantaneous forward rate from zero-coupon yield curve

I have a big dataset containing zero-coupon bond yields with different relative maturities. I fix a time horizon in my dataset and want to calculate the instantaneous forward rate. Here is how I calculated it:

The yield curve is given by the formula $Y(t,T)=-\frac{\log(P(t,T))}{T-t}$.

So by inverting it we get the bond price:

$P(t,T)=\exp(-Y(t,T)(T-t))$

We get the instantaneous forward rate from the partial derivative of $\log(P(t,T))$ with respect to $T$, so the formula I use is:

$f(t,T_k)=-\frac{\log(P(t,T_k))-\log(P(t,T_{k-1}))}{T_k-T_{k-1}}$.

where $T_0=0$.

My goal is to set up an observation matrix of instantaneous forward rates for volatility estimation in a model, and I want to be sure my pre-calculations are fine. Thanks for your help in advance.
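One cheap sanity check for such pre-calculations (my own suggestion, not from the question): with $t=0$ we have $\log P(0,T_k) = -Y(0,T_k)\,T_k$, so the discrete forward rate is $f(0,T_k)=\frac{Y_k T_k - Y_{k-1} T_{k-1}}{T_k - T_{k-1}}$, and a flat yield curve must reproduce itself as the forward curve. A Scala sketch:

```scala
// Discrete instantaneous forward rates from zero-coupon yields, with t = 0:
// f_k = -(log P(0,T_k) - log P(0,T_{k-1})) / (T_k - T_{k-1})
// and log P(0,T) = -Y(0,T) * T.
def forwards(ts: Array[Double], ys: Array[Double]): Array[Double] = {
  val logP = ts.indices.map(i => -ys(i) * ts(i))
  (1 until ts.length)
    .map(k => -(logP(k) - logP(k - 1)) / (ts(k) - ts(k - 1)))
    .toArray
}
```

If a constant yield curve does not come back out as a constant forward curve, the maturity grid or the sign convention is off somewhere.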

### StackOverflow

#### Scala: case class default values and equals/hashCode: Point ≠ Point(0,0)

Why are these sets all different?

case class Point(x:Int = 0, y:Int = 0)

Set(Point, Point)               // Set(Point)
Set(Point, Point(0,0))          // Set(Point, Point(0,0))
Set(Point(0,0), Point(x=0,y=0)) // Set(Point(0,0), Point(0,0))


Set equality is false too.

I would think that even with defaults, equals and hashCode would depend on the values, not on the string representation or something.
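The likely culprit (my reading, worth verifying against the spec): a bare `Point` is not a `Point` instance but the auto-generated companion object, so `Set(Point, Point)` is a one-element set containing the companion, which never equals `Point(0,0)`. Default arguments only kick in when the constructor is actually called, as in `Point()`. A sketch:

```scala
case class Point(x: Int = 0, y: Int = 0)

// A bare `Point` refers to the companion object, not to Point(0, 0).
val companion: Any = Point

// Calling the constructor, with or without named defaults, yields equal values,
// so all three collapse to a single set element.
val allSame = Set(Point(), Point(0, 0), Point(x = 0, y = 0))
```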

#### How to instantiate trait which extends class with constructor

Have a code:

class A(name:String)
trait E extends A

new E{}  //compile error


Is such inheritance possible? I tried creating a val or def in the body of the anonymous class; that doesn't help.
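It is possible, but the anonymous class has to go through A (or a subclass of it) so the constructor argument gets supplied; `new E {}` alone gives the compiler no value for name. A sketch of the usual workaround:

```scala
class A(val name: String)
trait E extends A

// Mix the trait into an instantiation of A, supplying the constructor argument:
val e = new A("demo") with E

// e is both an A and an E.
val asE: E = e
```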

### Fefe

#### The latest highlight of the propaganda battle between ...

The latest highlight of the propaganda battle between Russia and Ukraine:
The man who introduced himself as a "lieutenant colonel of the Russian army" after self-defense forces occupied the police station in the eastern Ukrainian city of Gorlowka in mid-April is a Ukrainian grave robber. The Ukrainian agency UNIAN reported this on Wednesday, citing the Internet portal Ostrow.

### StackOverflow

#### Review for database throttling trait based on play slick plugin

I implemented a throttled database service trait that wraps my service code in a Future, supplies a Slick session, and throttles the number of requests according to the length of the thread pool queue.

My main reason for this was to move the responsibility for database session/transaction handling away from the controllers and into the service layer of the application.

However, since I am fairly new to Play and Scala in general and I am working alone, I would really like some input regarding my code. Is it good practice at all? Am I about to face any negative performance implications? Are there ways to optimize or refactor my code?

The code can also be found on Pastebin. Thank you for your input!

import daos.exceptions.TabChordDBThrottleException
import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import scala.concurrent._
import play.api.db.slick

/**
 * Implement this trait to get a throttled database session wrapper.
 */
trait ThrottledDBService {
  /** Override to use a different Application. */
  protected def app = slick.Config.app

  /** Override to use a different database name. */
  protected def dataBaseName = slick.Config.defaultName

  protected object DBConfiguration {
    // Builds a bounded thread pool plus an execution context backed by it.
    private def buildThreadPoolExecutionContext(minConnections: Int, maxConnections: Int) = {
      val pool = new ThreadPoolExecutor(minConnections, maxConnections,
        0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue[Runnable]())
      (pool, ExecutionContext.fromExecutorService(pool))
    }

    val partitionCount = app.configuration.getInt(s"db.$dataBaseName.partitionCount").getOrElse(2)
    val maxConnections = app.configuration.getInt(s"db.$dataBaseName.maxConnectionsPerPartition").getOrElse(5)
    val minConnections = app.configuration.getInt(s"db.$dataBaseName.minConnectionsPerPartition").getOrElse(5)
    val maxQueriesPerRequest = app.configuration.getInt(s"db.$dataBaseName.maxQueriesPerRequest").getOrElse(20)

    val (threadPool, executionContext) = buildThreadPoolExecutionContext(minConnections, maxConnections)
  }

  /** A predicate for checking our ability to service database requests, determined by ensuring that the request
    * queue doesn't fill up beyond a certain threshold. For convenience we use the max number of connections * the max
    * # of db requests per web request to determine this threshold. It is a rough check, as we don't know how many
    * queries we're going to make or what other threads are running in parallel etc. Nevertheless, the check is
    * adequate in order to throttle the acceptance of requests to the size of the pool.
    * @see https://github.com/TechEmpower/FrameworkBenchmarks/pull/124
    */
  protected def isDBAvailable: Boolean = {
    val dbc = DBConfiguration
    dbc.threadPool.getQueue.size() < (dbc.maxConnections * dbc.maxQueriesPerRequest)
  }

  /**
    * Wraps the block with a Future in the appropriate database execution context and Slick session,
    * throttling the number of requests according to the rules specified in the db config.
    *
    * Terminates the future with a TabChordDBThrottleException in case of queue overload.
    * @param body user code
    * @return Future of the database computation
    */
  protected def throttled[A](body: slick.Session => A): Future[A] = {
    future {
      if (isDBAvailable) {
        slick.DB(dataBaseName)(app).withSession { s =>
          body(s)
        }
      } else {
        throw new TabChordDBThrottleException("Too many Requests pending in Threadpool")
      }
    }(DBConfiguration.executionContext)
  }

  /**
    * Wraps the block with a Future in the appropriate database execution context and Slick transaction,
    * throttling the number of requests according to the rules specified in the db config.
    *
    * Terminates the future with a TabChordDBThrottleException in case of queue overload.
    * @param body user code
    * @return Future of the database computation
    */
  protected def throttledTransaction[A](body: slick.Session => A): Future[A] = {
    throttled { s =>
      s.withTransaction {
        body(s)
      }
    }
  }
}


### TheoryOverflow

#### What are some of the active CS blogs which one should follow?

Currently, many top-notch researchers (or their groups) maintain active blogs. These blogs keep us updated on the latest research in their fields of interest. In most cases, blog articles are easier to understand than the corresponding papers, since they skip the gory details and spell out the intuitions (which are generally missing in a paper).

Hence it would be useful to have a list of such blogs. Please categorize your answers and, if possible, add a line or two highlighting why you like the particular blog.

### StackOverflow

#### Filter elements from two lists

Basically I have two lists, something like:

L1 = [one, two, three, five, eleven, million]
L2 = [five, million]


So I want to filter out the elements of the second list L2

to get

[one, two, three, eleven]


I have used the foldl function to loop over L1 and then a foreach loop to decide which element to append, comparing against the second list, but I can't seem to get the logic right. I have something like this:

filter_a(L1, L2) ->
    List = foldl(fun(X, A) ->
                     L = lists:foreach(fun(E) ->
                                           case E =:= X of
                                               true ->
                                                   [];
                                               _ ->
                                                   X
                                           end
                                       end, L2),
                     lists:append([A, [L]])
                 end, [], L1),
    List.


How can I do this in an easy way?
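Setting the Erlang specifics aside, the nested foreach isn't needed: the whole task is a single filter over L1 that keeps elements not contained in L2 (in Erlang, lists:filter with lists:member, or simply L1 -- L2 when duplicates don't matter). The shape of the solution, sketched here in Scala for comparison:

```scala
val l1 = List("one", "two", "three", "five", "eleven", "million")
val l2 = List("five", "million")

// Keep exactly the elements of l1 that do not occur in l2.
val filtered = l1.filterNot(l2.contains)
```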

#### Incanter sample mean and variance not close to distribution mean and variance

I answered a question regarding generating samples with a positive support and known mean and variance using the gamma distribution in NumPy. I thought I'd try the same in Incanter. But unlike the results I got with NumPy, I could not get a sample mean and variance close to the distribution's mean and variance.

(defproject incanter-repl "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [incanter "1.5.4"]])


(require '[incanter
           [core]
           [distributions :refer [gamma-distribution mean variance draw]]
           [stats :as stats]])

(def dist
(let [mean 0.71
variance 2.89
theta (/ variance mean)
k (/ mean theta) ]
(gamma-distribution k theta)))


### Incanter calculates mean and variance of the distribution

(mean dist) ;=> 0.71
(variance dist) ;=> 2.89


### I calculate a sample mean and variance based on draws from that distribution

(def samples (repeatedly 10000 #(draw dist)))

(stats/mean samples) ;=> 0.04595208774029654
(stats/variance samples) ;=> 0.01223348345651905


I expected these stats calculated on the sample to be much closer to the mean and variance of the distribution. What am I missing?

Incanter has a bug in the implementation of the mean and variance methods. Just like me (and despite its documentation and naming), it treats the second parameter as a scale parameter in mean and variance.

(defrecord Gamma-rec [shape rate] ; using defrecord since cdf was not matching up in unittest without switching ":lower-tail"
Distribution
(pdf [d v] (.pdf (Gamma. shape rate (DoubleMersenneTwister.)) v))
(cdf [d v] (Probability/gamma rate shape v)) ; TODO decide on :lower-tail
(draw [d] (cern.jet.random.tdouble.Gamma/staticNextDouble shape rate))
(support [d] [0,inf+])
(mean [d] (* shape rate))
(variance [d] (* shape rate rate))
)


These should instead be:

(mean [d] (/ shape rate))
(variance [d] (/ shape rate rate))


### TheoryOverflow

#### Kuratowski's graph planarity criterion

There is a short proof of Kuratowski's graph planarity criterion, but I don't completely understand it. So I hope someone can help me with that proof.

Here is the short proof of Kuratowski's criterion.

What I am struggling with is the first lemma.

I would appreciate it if someone could help me.

### StackOverflow

#### secure social websockets play framework

How do I secure a WebSocket call in the Play framework using Scala SecureSocial?

def statusfeed() = WebSocket.using[String] { implicit request =>
  if (logged in) {
    ...
  } else {
  }
}


Thanks,

edit 1:

I tried this, but it didn't work; I always get "not logged in":

def statusfeed() = WebSocket.using[String] { implicit request =>
  var in = Iteratee.ignore[String]
  var out = Enumerator.empty[String]
  session.get("userId").map { sessionId =>
    // sessionId is now userId in session
    // check whether he is authorised to view the page or not
    // Ok("user in session")
    def getLoadAverage = {
      "%1.2f" format
    }

    in = Iteratee.ignore[String]
    out = Enumerator.repeatM {
    }

  }.getOrElse {
    log.info("not logged in")
    // anything that you want when he is not in session
    Redirect("/") // NOT IN SESSION, REDIRECT TO LOGIN PAGE
  }
  (in, out)
}


edit 2: Replacing userId with SecureSocial.USER_KEY didn't work either

### CompsciOverflow

#### Matrix equality up to row/column permutations problem name

Sorry for the trivial question; does the following decision problem have an "official" (possibly short) name?

Given two $n \times m$ $\text{0-1}$ (binary) matrices $M_1, M_2$ check if they are the same up to row and column permutations.

(something like the short names used in complexity theory for decision problems, e.g. 3SAT, GI (Graph Isomorphism), X3C (Exact Cover by 3-Sets), CLIQUE, ...)
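I don't know an official name either, but for tiny instances the definition can be checked directly by brute force over row and column permutations; this exploration sketch (mine, exponential, only for sanity checks) makes the decision problem concrete:

```scala
// Are m1 and m2 equal up to some row permutation and some column permutation?
// Brute force: try every row permutation, then every column permutation.
def sameUpToPerms(m1: Vector[Vector[Int]], m2: Vector[Vector[Int]]): Boolean =
  m1.indices.toList.permutations.exists { rowPerm =>
    val rows = rowPerm.map(m1) // m1 with rows reordered
    m1.head.indices.toList.permutations.exists { colPerm =>
      rows.map(row => colPerm.map(row).toVector).toVector == m2
    }
  }
```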

### StackOverflow

#### Comparing Scala and Java Double.NaN

Why does this comparison evaluate to true?

scala> Double.NaN equals java.lang.Double.NaN
res5: Boolean = true


But this one evaluates to false?

scala> Double.NaN == java.lang.Double.NaN
res6: Boolean = false
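A hedged summary of what is probably going on: == on primitive doubles follows IEEE 754 arithmetic, where NaN compares unequal to everything including itself, while equals works on boxed java.lang.Double values, whose contract deliberately treats two NaNs as equal (so they behave consistently in hash-based collections). A sketch:

```scala
val x: Double = Double.NaN

// IEEE 754 primitive comparison: NaN is not equal to anything, itself included.
val ieee = x == Double.NaN // false

// Boxed comparison: java.lang.Double#equals defines two NaNs as equal,
// so NaN values work consistently as keys in hash-based collections.
val boxed = java.lang.Double.valueOf(x).equals(java.lang.Double.valueOf(Double.NaN)) // true
```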


#### Datomic with friend authentication not working properly

I'm working on a Clojure web application project for my college, and I am trying to connect the Datomic database with Friend authentication, but it is kinda buggy... I will explain further.

First I am doing user registration (user insertion into the Datomic database) like this, and it is working:

(defn insert-user [firstname lastname email password sex date]
  (.get (.transact conn
                   [{:db/id #db/id[:db.part/user -1000001]
                     :user/name firstname
                     :user/lastName lastname
                     :user/gender sex
                     :user/birthDate date}]))
  (resp/redirect "/"))


The routes handler and Friend authenticator look like this; the main function starts the app:

(def page (handler/site
(friend/authenticate
routes

{:allow-anon? true
:unauthorized-handler #(-> (html5 [:h2 "You do not have sufficient privileges to access " (:uri %)])
resp/response
(resp/status 401))
:credential-fn (partial creds/bcrypt-credential-fn users)
:workflows [(workflows/interactive-form)]})

(wrap-keyword-params routes)
(wrap-nested-params routes)
(wrap-params routes)

))

(defn -main []
(run-jetty page {:port 8080 :join? false}))


And finally, the Datomic query for users, to match with Friend's creds/bcrypt-credential-fn function:

(defn upit-korisnici []
  (def temp (d/q '[:find ?u ?p
                   :where [?user :user/username ?u]]
                 (d/db conn)))
  (def users (into {} (map (fn [[k v]] [k {:username k :password v}]) temp)))
  users)


The thing that is bugging me and leaving me helpless is this: when I register (insert a user), the user is inserted into the Datomic database, but when I try to log in I can't. It says wrong email and password, but the new user is there. When I restart the whole app and try to log in with the new user's credentials, it goes through and logs in. Does anyone know how to solve this problem?

#### Calculating Minimal Subset With Given Sum

I was doing a problem in Scala and this is the summary of the task statement:

There is a list of integers (of length N, 0 < N < 10^5) and another integer S (0 < S < 10^15). You are required to find the size of the smallest subset of the given list whose elements sum to at least S.

Input is given as below:
4
4 12 8 10
4
4 13 30 100

Output for above example:
1
2
3
-1

First line is length of array, the second is the array of integers (0 < A[i] < 10^9), the third is the number of test cases (0 < T < 10^5) and the fourth contains the S (for each test case).

Here's what I tried: the specific elements selected do not matter, so I sorted the given integers largest first. Then I took the first element and checked whether it is greater than or equal to S. If not, I added the second element as well, and so on, until the sum became greater than or equal to S.

This algorithm works and I got many test cases correct, but I'm getting Time Limit Exceeded for some. If you can point out how I could make this faster, or if there's a better way to do this, it would be much appreciated.

My code (Scala):

object Solution {
  def main(args: Array[String]) {
    val n = readInt()
    val arr: Array[Long] = readLine().split(" ").map(_.toLong).sortWith(_ > _)

    val sums: Array[BigInt] = new Array(n)
    sums(0) = arr(0)
    for (i <- 1 until n) sums(i) = sums(i - 1) + arr(i)

    val t = readInt()
    for (i <- 1 to t) {
      val s: BigInt = BigInt(readLong())
      if (sums(n - 1) < s)
        println(-1)
      else {
        var j = 0
        while (sums(j) < s) j += 1
        println(j + 1)
      }
    }
  }
}
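The prefix sums over the descending sort are non-decreasing, so each query can be answered by binary search instead of a linear scan, cutting per-query cost from O(n) to O(log n) — usually enough to clear a time limit with up to 10^5 queries. A sketch of the idea in Python (it mirrors the approach above; not a verified judge submission):

```python
from bisect import bisect_left
from itertools import accumulate

def min_subset_sizes(arr, queries):
    # Prefix sums over the list sorted in descending order.
    prefix = list(accumulate(sorted(arr, reverse=True)))
    total = prefix[-1]
    # bisect_left finds the first prefix >= s, i.e. the fewest elements needed.
    return [-1 if s > total else bisect_left(prefix, s) + 1 for s in queries]

print(min_subset_sizes([4, 12, 8, 10], [4, 13, 30, 100]))  # [1, 2, 3, -1]
```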


#### scala, slick : where can I find exceptions raised by a given method?

I wonder where I can find the exceptions raised by code such as this:

def readFromDB: String = {
  db_sqlite_xml.withSession { implicit db: Session =>
    xmlQuery.first.text
  }
}


I can't find it in the Slick scaladoc (http://slick.typesafe.com/doc/2.0.1/api/#package); I searched for the method "first" in the scaladoc's TableQuery class, but without any success.

thanks.

olivier

### QuantOverflow

#### What are good books to understand economics?

I am currently studying Managerial Economics by W. Bruce Allen. The book is good. Are there any good books with quizzes that make economics sweeter? Books that relate to real-world facts and figures would be better.

### StackOverflow

#### problems with scallop and updating to scala 2.11

I tried to update to Scala 2.11.0-M5, but I've run into problems. I use Scallop, so I needed to build it with Scala 2.11.0-M5 because I could not find a prebuilt jar. The compile of Scallop goes fine, but when I try to run "sbt publish-local" I get the errors below while it builds the documentation. To me this looks like it is trying to build some sbt source file. I tried to find newer source for sbt (or an sbt jar built with Scala 2.11.0-M5), but could not. Can anyone offer any suggestions?

thanks very much!

[info] Generating Scala API documentation for main sources to /Users/jetson/develop/scala/scala-2.11/scallop/target/scala-2.11/api...
[info] Compiling 12 Scala sources to /Users/jetson/develop/scala/scala-2.11/scallop/target/scala-2.11/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.11.0-M5. Compiling...
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_1dff5778/CompilerInterface.scala:246: error: recursive method rootLoader needs result type
^
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_1dff5778/CompilerInterface.scala:246: error: value rootLoader is not a member of scala.tools.nsc.backend.JavaPlatform
^
two errors found
[info] 'compiler-interface' not yet compiled for Scala 2.11.0-M5. Compiling...
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_4baba5ae/CompilerInterface.scala:246: error: recursive method rootLoader needs result type
^
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_4baba5ae/CompilerInterface.scala:246: error: value rootLoader is not a member of scala.tools.nsc.backend.JavaPlatform
^
two errors found
[error] (compile:doc) Error compiling sbt component 'compiler-interface'
[error] (compile:compile) Error compiling sbt component 'compiler-interface'
[error] Total time: 15 s, completed Oct 21, 2013 11:41:14 AM


#### how many threads do scala's parallel collections use by default?

When I call Array.tabulate(100)(i => i).par map { _ + 1 }, how many threads are being used?

Thanks
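Scala's parallel collections execute, by default, on a pool sized to `Runtime.getRuntime.availableProcessors`, i.e. one worker per available core. The same convention shows up in other ecosystems; for instance, Python's `multiprocessing.Pool` defaults to `os.cpu_count()` workers (a cross-language analogy, not the Scala machinery itself):

```python
import os
from multiprocessing import Pool

def inc(x):
    return x + 1

if __name__ == "__main__":
    # Pool() with no argument creates os.cpu_count() worker processes.
    print("default workers:", os.cpu_count())
    with Pool() as pool:
        print(pool.map(inc, range(5)))  # [1, 2, 3, 4, 5]
```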

#### Convert java.util.IdentityHashMap to scala.immutable.Map

What is the simplest way to convert a java.util.IdentityHashMap[A,B] into a subtype of scala.immutable.Map[A,B]? I need to keep keys separate unless they are eq.

Here's what I've tried so far:

scala> case class Example()
scala> val m = new java.util.IdentityHashMap[Example, String]()
scala> m.put(Example(), "first!")
scala> m.put(Example(), "second!")
scala> m.asScala // got a mutable Scala equivalent OK
res14: scala.collection.mutable.Map[Example,String] = Map(Example() -> first!, Example() -> second!)
scala> m.asScala.toMap // doesn't work, since toMap() removes duplicate keys (testing with ==)
res15: scala.collection.immutable.Map[Example,String] = Map(Example() -> second!)
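The underlying tension is that an immutable `Map` dedupes keys with `==`, so identity-distinct keys collapse. One workaround in any language is to key an ordinary map by reference identity instead. A Python sketch of the idea, where `id()` plays the role of `eq` (illustrative only, not the Scala API):

```python
class Example:
    # All instances compare equal, like a no-field case class.
    def __eq__(self, other):
        return isinstance(other, Example)
    def __hash__(self):
        return 0

a, b = Example(), Example()

plain = {a: "first!", b: "second!"}
print(len(plain))  # 1: equal keys collapsed, only "second!" survives

# Keying by object identity keeps both entries.
by_identity = {id(k): (k, v) for k, v in [(a, "first!"), (b, "second!")]}
print(len(by_identity))  # 2
```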


#### After upgrading specs2 to 2.10 matcher 'haveTheSameElementsAs' seems lost

Recently I upgraded my specs2 dependency to 2.3.11 for Scala 2.10, and apparently after that the compiler can't recognize the matcher must haveTheSameElementsAs. I couldn't find where they moved it, or a similar matcher, in specs2. Any idea where it is, or what I should use instead?

### CompsciOverflow

#### totally ordered multicast with Lamport timestamp

I'm studying distributed systems and synchronization, and I didn't catch this solution for totally ordered multicast with Lamport timestamps. I read that it doesn't need acks to deliver a message to the application, but:

"It is sufficient to multicast any other type of message, as long as that message has a timestamp larger than the received message. The condition for delivering a message m to the application is that another message has been received from each other process with a larger timestamp. This guarantees that there are no more messages underway with a lower timestamp."

This is a definition from a book. I tried to apply this definition to an example but I guess that something is wrong.

Example:

There are 4 processes and they multicast the following messages (second number in parentheses is timestamp) :
P1 multi-casts (m11, 5); (m12, 12); (m13, 14);
P2 multi-casts (m21, 6); (m22, 14);
P3 multi-casts (m31, 5); (m32, 7); (m33, 11);
P4 multi-casts (m41, 8); (m42, 15); (m43, 19).

Supposing that there are no acknowledgments, can I work out which messages can be delivered and which cannot? Based on the definition, my guess is that only m11 and m31 can be delivered to the application, because all the other messages received will have a greater timestamp, but this seems very strange, and I think I didn't understand the delivery condition very well. I have an exam next week, and in general I'd like to understand this mechanism.
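The delivery rule can be checked mechanically against the example (a sketch under the stated assumptions: no acks, ties broken by process id): deliver from the front of the timestamp-ordered queue as long as every other process has already sent something with a larger timestamp. Running it suggests more than just m11 and m31 get through:

```python
# Timestamps multicast by each process, taken from the example above.
msgs = {1: [5, 12, 14], 2: [6, 14], 3: [5, 7, 11], 4: [8, 15, 19]}

# Global queue ordered by (timestamp, sender id) to get a total order.
queue = sorted((ts, p) for p, ts_list in msgs.items() for ts in ts_list)

delivered = []
for ts, p in queue:
    # Deliverable iff every *other* process has sent some message with a
    # larger timestamp (so nothing smaller can still be underway).
    if all(any(t > ts for t in msgs[q]) for q in msgs if q != p):
        delivered.append((ts, p))
    else:
        break  # head of queue is blocked; nothing behind it may be delivered

print(delivered)
# m11, m31, m21, m32, m41, m33 are deliverable; m12 (timestamp 12) blocks
# because P3 has sent nothing with a timestamp above 11.
```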

### StackOverflow

#### default value for functions in parameters in Scala

I was learning and experimenting with Scala. I wanted to implement a generically typed function which takes a function as a parameter and provides a default implementation of that function.

Now when I try it without the generic type, it works:

def defaultParamFunc(z: Int, y: Int)(f: (Int, Int) => Int = (v1: Int, v2: Int) => { v1 + v2 }): Int = {
  val ans = f(z, y)
  println("ans : " + ans)
  ans
}


this doesn't give any error

but when I try the same with generic type,

def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B, v2: B) => { v1 + v2 }) = {
  val ans = f(z, y)
  println("ans : " + ans)
  ans
}


I get the error :

[error]  found   : B
[error]  required: String
[error]  def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B,v2: B) => { v1 + v2 }) = {
[error]                                                                               ^


Is the error because the compiler doesn't know whether type B will be addable? When I just return v1 or v2 instead of v1 + v2, it works:

def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B, v2: B) => { v1 }) = {
  val ans = f(z, y)
  println("ans : " + ans)
  ans
}


If so, how do I specify that the type must be Numeric? I tried replacing B with B : Numeric, but it still gives the same error.
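The diagnosis is right: with an unconstrained B the compiler has no evidence that + exists on B, and the odd "required: String" comes from the fallback that would read v1 + v2 as string concatenation. One standard remedy in Scala is a Numeric context bound and implicitly[Numeric[B]].plus(v1, v2), though combining a context bound with a default parameter in an earlier parameter list can be finicky. For contrast, in a duck-typed language the same default type-checks without any evidence up front; a Python sketch (names are illustrative):

```python
from operator import add

def default_param_func(z, y, f=add):
    # '+' is resolved at call time, so no Numeric-style evidence is needed.
    ans = f(z, y)
    print("ans :", ans)
    return ans

print(default_param_func(3, 4))                      # default f: 7
print(default_param_func(3, 4, lambda a, b: a * b))  # explicit f: 12
```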

### TheoryOverflow

#### Equational Logic and First Order Predicate Logic

I am interested in using equational theories (ET) together with equational logic (EL) as found in algebraic specification languages such as CafeOBJ. I wish to use ET+EL to represent and prove sentences in first-order predicate logic (FOPL). The advantage of such an approach is that one can easily map loose theories written in pseudo-FOPL to more concrete theories which may have initial models (using views). Translating FOPL to EL seems to require auxiliary techniques such as Skolemization and sentence splitting. I am concerned that there may be some FOPL sentences which cannot be represented using EL even with these auxiliary techniques. I am aware that in general EL is regarded as a sub-logic of FOPL and any valid EL theorem is a valid FOPL theorem (but not vice versa). Goguen and Malcolm1 and Goguen and Malcolm2 describe FOPL as the background for equational proof scores in OBJ, which was a predecessor of CafeOBJ/Maude. They also provide general advice on how to use EL to prove FOPL theorems.
I am using an example from COQ: 1.4 Predicate Calculus, which I have written as two CafeOBJ theories or loose specifications. They are my two attempts to represent FOPL in EL. I am not sure if the approaches are valid.

Here is the description of the relation R from COQ.

Hypothesis R_symmetric : $\forall x y:D, R x y \implies R y x$.
Hypothesis R_transitive : $\forall x y z:D, R x y \implies R y z \implies R x z$.
Prove that R is reflexive in any point x which has an R successor.
For any x and y, R x y implies R y x by symmetry, then by transitivity, we have R x x.
Symmetry and transitivity are not enough to prove reflexivity; we must also assume that x is related to something (e.g. some R x ? or R ? x exists).

Consider the 2 equations labelled PROPERTY in the module TRANSITIVE1 and EQUATION in the module TRANSITIVE2.

My questions are these:
Q1.
Do the PROPERTY equation and its reduction represent and prove reflexivity? The results of the reductions would appear to represent a proof.
Q2.
Does the reduction of the EQUATION in TRANSITIVE2 prove reflexivity? I use a form of selective application of bi-directional rewriting. This form of rewriting is highly controlled by the user. Space does not permit a full description, but in this case the condition in the EQUATION is executed first and the assumption R x Y is applied during the rewriting process. We could describe this as a manual proof with some machine support.
Q3.
How do these approaches differ? From my web searches I get implicit and explicit as follows: the implicit specification of a function or relation asserts a property that its value must satisfy. Implicit definitions take the form of a logical predicate over the input and result variables that gives the result's properties. This approach seems to be distinct from the normal Peano-style equational axioms (e.g. N + 0 = N). The PROPERTY approach seems to fit this description. Explicitly defined functions or relations are those where the definition can be used to calculate an output from the arguments. The EQUATION approach seems to fit this description. Are these reasonable distinctions?

Regards, Pat
**> This is a loose module describing all models where the equation labelled PROPERTY holds.
mod* TRANSITIVE1 {
  **> One sort or type called D.
  [ D ]
  **> A constant which ensures that transitivity holds
  op Y : -> D
  **> The symmetric property is asserted by CafeOBJ's commutativity property.
  op R__ : D D -> Bool {comm}
  op P : D D D -> Bool
  vars x y z : D
  **> Right associativity of implies
  eq [PROPERTY] : P(x,z,y) = (R x y) implies (R y z) implies (R x z) .
}
**> Normal rewriting
open TRANSITIVE1 .
red P(x,Y,x) . -- Gives true
close

**> A loose module describing all models where the equation labelled EQUATION holds.
mod* TRANSITIVE2 {
  [ D ]
  op R__ : D D -> Bool {comm}
  op P : D D D -> Bool
  vars x y z : D
  **> Replace COQ implies triplet with CafeOBJ conditional equation, using the following:
  **> [A -> B -> C] = [(A & B) -> C] = [C = TRUE if (A & B)]
  ceq [EQUATION] : R x z = true if ((R x y) and (R y z)) .
  **> Normal rewriting cannot deal with extra variable y in the condition on the RHS.
  **> Hence user controlled rewriting required using start/apply commands.
}
open TRANSITIVE2 .
**> Using start/apply
op X : -> D .
**> If any variable x is related to arbitrary constant X
eq [e1] : R x X = true .
**> Then x is related to itself.
**> CafeOBJ's start/apply commands allow selective bi-directional rewriting
start R x x .
apply .EQUATION with y = X at term .
apply reduce at term .
-- Result true : Bool
close
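Independently of CafeOBJ, the informal argument (from R x y, symmetry gives R y x, then transitivity gives R x x) can be sanity-checked by exhaustive enumeration of small relations; a brute-force Python sketch over a 3-element domain:

```python
from itertools import product

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def transitive(R):
    return all((x, z) in R for (x, y1) in R for (y2, z) in R if y1 == y2)

D = range(3)
pairs = list(product(D, D))

# Enumerate every relation on the 3-element domain (2**9 = 512 of them).
for bits in range(2 ** len(pairs)):
    R = {p for i, p in enumerate(pairs) if bits >> i & 1}
    if symmetric(R) and transitive(R):
        # Every x with an R-successor must be R-related to itself.
        assert all((x, x) in R for (x, y) in R)

print("checked all symmetric+transitive relations on a 3-element set")
```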

### StackOverflow

#### What is the best piece of recursion code you have come across?

Cases where really complex logic was simplified because of recursion, cases where recursion is used in everyday programming

### CompsciOverflow

#### Deterministic Multi-tape Turing Machine construction

I'm trying to construct a deterministic multi-tape Turing machine for the following language, in order to show that $L$ is in $DTIME(n)$:

$$L = \{ www \mid w \in \{a,b\}^+ \}$$

I'm not sure how to get started. Any hints would be appreciated.
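As a warm-up, note what the machine must check in linear time: that the input length is divisible by 3 and the three thirds are equal. On a multi-tape machine one standard trick is to locate the two boundary positions with heads moving at different speeds, then compare the segments head-to-head. The check itself, sketched in Python (this is the specification being decided, not the machine):

```python
def is_www(s):
    # s is in L iff s = www for some non-empty w over {a, b}.
    n = len(s)
    if n == 0 or n % 3 != 0:
        return False
    k = n // 3
    return s[:k] == s[k:2 * k] == s[2 * k:]

print(is_www("ababab"))  # True (w = "ab")
print(is_www("aabbab"))  # False
```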

### StackOverflow

#### clojure: with-redefs doesn't work with clojure.core functions?

I have a question about with-redefs. The following example doesn't work as expected: in findmax, clojure.core/max is always called instead of the anonymous function in the with-redefs form.

(defn findmax [x y]
  (max x y))

(with-redefs [clojure.core/max (fn [x y] (- x y))]
  (findmax 2 5))


When I make the following changes everything works as expected:

(defn mymax [x y]
  (max x y))

(defn findmax [x y]
  (mymax x y))

(with-redefs [my/max (fn [x y] (- x y))]
  (findmax 2 5))


What am I doing wrong here?

#### Scala compilation error where the change in index.html to refer to a new model Quote is not detected

I updated the Scala index view file according to this tutorial (http://scala-ide.org/docs/tutorials/play/) and got a compilation error.

My code is as follows:

Application.scala

package controllers

import play.api._
import play.api.mvc._
import models.Quote

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready.",
      Quote("Citer les pensees des autres, c'est regretter de ne pas les avoir trouvees soi-meme.",
            "Sacha Guitry")))
  }

}


Quote.scala

package models
case class Quote(text: String, author: String)


index.scala

@(message: String, quote: models.Quote)

@main("Welcome to Play 2.1") {

<p>@quote.text<em> - @quote.author</em></p>

}


Play is running in auto-reloading mode in the background, started with the following command:

~ run

I cannot understand why I am getting this compile error. I even tried doing an Eclipse "Build All".

#### Using Thread.sleep() inside an foreach in scala

I have a list of URLs in a List.

I want to get the data by calling WS.url(currurl).get(). However, I want to add a delay between each request. Can I add Thread.sleep()? Or is there another way of doing this?

one.foreach { currurl =>
  import play.api.libs.ws.WS
  println("using " + currurl)
  val p = WS.url(currurl).get()
  p.onComplete {
    case Success(s) => {
      // do something
    }
    case Failure(f) => {
      println("failed")
    }
  }
}
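Thread.sleep inside the foreach does space out when the requests are issued (at the cost of blocking the calling thread), but note that the get() futures still complete asynchronously and in any order. The throttling pattern itself, sketched in Python with a stand-in `fetch` (no real HTTP involved):

```python
import time

def fetch(url):
    return "data from " + url  # stand-in for WS.url(url).get()

def fetch_throttled(urls, delay=0.01):
    results = []
    for u in urls:
        results.append(fetch(u))  # issue the request
        time.sleep(delay)         # pause before issuing the next one
    return results

print(fetch_throttled(["a", "b", "c"]))
```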

### CompsciOverflow

#### Proof that union of a regular and a not regular language is not regular

Let $L_1$ be regular, $L_1 \cap L_2$ regular, $L_2$ not regular. Show that $L_1 \cup L_2$ is not regular or give a counterexample.

I tried this: look at $L_1 \backslash (L_2 \cap L_1)$. This one is regular: I can construct a finite automaton for it ($L_1$ is regular and $L_2 \cap L_1$ is regular, so remove all the accepting paths (a finite amount) for $L_1 \cap L_2$ from the finite amount of paths for $L_1$, leaving a finite amount of paths). This language is disjoint from $L_2$, but how can I prove that the union of $L_1 \backslash (L_1 \cap L_2)$ (regular) and $L_2$ (not regular) is not regular?
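A shorter route (one standard closure-property argument, sketched here rather than taken from the post): suppose $L_1 \cup L_2$ were regular. Regular languages are closed under difference and union, and

$$L_2 = \bigl((L_1 \cup L_2) \setminus L_1\bigr) \,\cup\, (L_1 \cap L_2),$$

so $L_2$ would be regular too, contradicting the assumption. Hence $L_1 \cup L_2$ is not regular, and no counterexample exists.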

### /r/clojure

#### compojure in production

Hello there

I have been building REST APIs in Compojure for a while now and have really enjoyed it. I have a project about to go into production, and I just wanted to have a brief discussion about production best practices. For other projects, I have ssh'd into the server, run lein ring server-headless in a tmux session, and left it at that. I'm sure there is a better way than this to run Compojure in production.

Any thoughts?

submitted by tgallant

### StackOverflow

#### Large array indexes scala

(Language: scala)

I have a problem where I want to iterate over 1 million numbers, but for some reason I get an ArrayIndexOutOfBoundsException. The function I am using works perfectly for 100 000 numbers, but I get the exception if I add a zero.

There cannot be a problem with the array size, because I have built a sort of flex array, where the array has about 1000 elements and each element consists of a list of elements.

So the problem looks something like this:

for (x <- 1 to 1000000) {
// Do a thing
}


Can for loops only handle a certain number of elements?

I have tried running the program with the "extra-space-flag"

I include the whole code below for reference, in case it makes a difference

object Problem14 {

  class FlexArray(n: Int) {
    var array = new Array[List[Tuple2[Int, Int]]](n)
    val size = n

    for (x <- 0 until size) {
      array(x) = List()
    }

    def insert(n: Int, value: Int) {
      if (find(n) != -1) {
        val i = n % size
        array(i) = (n, value) :: array(i)
      }
    }

    def read(i: Int): List[Tuple2[Int, Int]] = {
      array(i)
    }

    def findAux(list: List[Tuple2[Int, Int]], n: Int): Int = {
      if (list == Nil) {
        -1
      } else {
        val (num, value) = list.head
        if (n == num) {
          value
        } else {
          findAux(list.tail, n)
        }
      }
    }

    def find(n: Int): Int = {
      val i = n % size
      findAux(array(i), n)
    }
  }

  var accArray = new FlexArray(10000)

  // this function should be called with 1 as the second argument
  def chainLength(n: Int, acc: Int): Int = {
    if (n == 1)
      acc
    else {
      val value = accArray.find(n)
      if (value != -1)
        acc + value
      else if (n % 2 == 0)
        chainLength(n / 2, acc + 1)
      else
        chainLength(3 * n + 1, acc + 1)
    }
  }

  def main(args: Array[String]) {
    var max = 0
    var maxnum = 0

    for (x <- 1 to 1000000) {
      var value = chainLength(x, 1)
      accArray.insert(x, value)
      if (max < value) {
        max = value
        maxnum = x
      }

      println(maxnum + ": " + max)
    }
  }
}
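A likely culprit (inferred from the code, not stated in the post): for some starting values below 1,000,000 the intermediate Collatz values overflow Int, so 3*n+1 wraps negative and n % size yields a negative array index — the for loop itself has no element limit. (Separately, insert only stores a value when find(n) != -1, so the cache never actually fills.) A Python sketch of the same computation; Python integers don't overflow, and a dict replaces the hand-rolled FlexArray:

```python
# Memoized Collatz chain lengths; chain_length(1) == 1 by convention.
cache = {1: 1}

def chain_length(n):
    path = []
    while n not in cache:
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1  # values may exceed 2**31
    length = cache[n]
    for m in reversed(path):
        length += 1
        cache[m] = length
    return length

best = max(range(1, 1000000), key=chain_length)
print(best, chain_length(best))  # starting number with the longest chain
```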

### QuantOverflow

#### "Equivalent" data sets despite different numbers

Are the historical data sets of short-term Treasury bill rates considered the same as the historical data sets of savings account interest rates, because by definition they are both risk-free rates of return?

### Planet Theory

#### This week in history

The great earthquake of 1906 struck San Francisco on April 18, around 5 in the morning. While the earthquake already caused a lot of damage, it was the subsequent fire that ravaged the city: the earthquake had broken the water pipes, and so it was impossible to fight the fire because the hydrants were not working. Except for the hydrant at Church and 20th, which saved my house and a good part of the mission. The hydrant is painted golden, and once a year, on the anniversary of the earthquake, the fire department repaints it and leaves a token of appreciation. (They actually do it at 5 in the morning.)

By the way, there are two faults that can cause earthquakes in the San Francisco Bay Area. One is (our stretch of) the San Andreas fault, which runs close to the ocean, and which caused the 1906 quake and the 1989 one, and which may not be an imminent risk given the energy released in 1989. The other is the Hayward fault, which runs near Berkeley. The Hayward fault had big earthquakes in 1315, 1470, 1630, 1725, and 1868, that is about every 100-140 years, with the last one being 146 years ago…

25 years ago on April 15, Hu Yaobang died. The day before his funeral, about 100,000 people marched to Tiananmen square, an event that led to the occupation of the square, and which culminated in what in mainland China used to be referred to as the “June 4 events,” and now as the “I don’t know what you are talking about” events.

Also, something happened, according to tradition, 1981 years ago.

### /r/emacs

#### How to use installed highlight-symbol instead of the built-in one?

Before 24.4, I used this package: https://github.com/nschum/highlight-symbol.el and used "highlight-symbol-at-point" a lot.

In 24.4, there is a built-in highlight-symbol-at-point from "hi-lock.el.gz". But the built-in one keeps highlights even if already highlighted, so I preferred the installed package.

Question is: how can I use the installed one instead of the built-in one?

FYI, the following is my config:

(require 'highlight-symbol)
(global-set-key [f2] 'highlight-symbol-next)
(global-set-key [(shift f2)] 'highlight-symbol-prev)
(global-set-key [?\s-.] 'highlight-symbol-at-point)
(global-set-key (kbd "H-,") 'highlight-symbol-query-replace)

submitted by goofansu

### StackOverflow

#### what's the elegant way to write this code in clojure?

I use Clojure and the Korma library.

(defn db-search-users
  [& {:keys [nick_name max_age min_age page page_size lmt oft]
      :or {lmt 10 page_size 10 oft 0}
      :as conditions}]
  (let [users-sql (-> (select* users)
                      (fields :user_name :id :nick_name)
                      (limit (if (nil? page) lmt page_size))
                      (offset (if (nil? page) oft (* page page_size))))]
    (exec (-> users-sql
              need_do_something_here))))


Now I need to add some search conditions to users-sql at "need_do_something_here". I can describe it in imperative style:

if (nick_name != nil)
  users-sql = (where users-sql (like :nick_name nick_name))

if (max_age != nil)
  users-sql = (where users-sql (> :birthday blabla....))

if (min_age != nil)
  users-sql = (where users-sql (< :birthday blabla....))


How can I do this in an elegant way, in functional style?

Another question: I think code like

(if (nil? page) lmt page)


is ugly. Is there a function in Clojure like (get_default_value_3_if_a_is_null a 3)?
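In Clojure this conditional threading is what cond-> is for: it threads a value through only the clauses whose test holds (and for the second question, (or a 3) yields 3 when a is nil, since or returns its first logically true argument). The same shape, sketched in Python with illustrative names standing in for the Korma calls:

```python
def thread_conditionally(value, steps):
    # Apply each transform only when its guard is true, like Clojure's cond->.
    for guard, transform in steps:
        if guard:
            value = transform(value)
    return value

def add_where(query, clause):
    return {**query, "where": query["where"] + [clause]}

nick_name, max_age, min_age = "bob", 30, None
query = thread_conditionally(
    {"select": ["user_name", "id", "nick_name"], "where": []},
    [
        (nick_name is not None, lambda q: add_where(q, ("nick_name", "like", nick_name))),
        (max_age is not None, lambda q: add_where(q, ("birthday", ">", max_age))),
        (min_age is not None, lambda q: add_where(q, ("birthday", "<", min_age))),
    ],
)
print(query["where"])  # only the nick_name and max_age clauses were added
```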

### QuantOverflow

#### Multiple day forecasting volatility using GARCH(1,1)

I've been struggling with volatility forecasting for a while. After digging around on the internet, I've come up with a quasi-solution. However, the result doesn't make sense to me. I want to forecast volatility multiple days into the future. The sigma I get increases over time for n.ahead = 50. I want to see the volatility 50 days in the future, but it can't be always increasing.

How should I do this correctly? Any tips will be appreciated. Thank you in advance.

library(quantmod)
library(rugarch)

data<-getSymbols("SPY", from="2000-01-01", to="2013-12-31")
dailyreturn <- dailyReturn(SPY$SPY.Adjusted)
mydata <- dailyreturn[,1]
model <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                    mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                    distribution.model = "norm")
modelfit <- ugarchfit(spec = model, data = mydata)
data = mydata[1:3521, , drop = FALSE]
spec = getspec(modelfit)
setfixed(spec) <- as.list(coef(modelfit))
forecast = ugarchforecast(spec, n.ahead = 50, n.roll = 3520, data = mydata[1:3521, , drop = FALSE], out.sample = 3520)
sigma(forecast)
plot(forecast)

### CompsciOverflow

#### In cache addressing, what value is placed in the offset field?

There is a 64 KB 1-word cache, and a word is 32 bits. From that I can derive that the length of the tag field is 16 bits and the length of the index field is 14 bits, and, as my professor taught me, there are always 2 bits left over for a byte offset. Why the offset field is 2 bits, other than that it fills the remaining 2 bits of the address, and what its contents are, was never covered in the course. But when I looked around on Google, I read (correct me if I am wrong) that the length of the offset field can vary. Although I have found answers on how to determine the length, I could not find anything about determining its contents when a read hit/miss is performed. My professor merely said "the byte offset is not used to select the word in the cache". Just looking for clarification.

### /r/clojure

#### Building Markov chains in Clojure: Can you pick the fake Java class name?

### StackOverflow

#### Extract all tokens from string using regex in Scala

I have a string like "httpx://__URL__/__STUFF__?param=value". This sample is a URL by convention; it could be anything with zero or more __X__ tokens in it. I want to use a regex to extract a list of all the tokens, so the output here would be List("__URL__", "__STUFF__"). Remember, I don't know beforehand how many (if any) tokens may be in the input string.

I've been struggling but unable to come up with a regex that will do the trick. Something like this did not work:

(?:.?(__[a-zA-Z0-9]+__).?)+

### Less Wrong

#### New LW Meetup: Christchurch NZ

Submitted by FrankAdamek

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up. Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup! If you missed the deadline and wish to have your meetup featured, you can reach me on gmail at frank dot c dot adamek.
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing lists in order to be notified when an irregular meetup is happening: Atlanta, Chicago, Cincinnati, Cleveland, Frankfurt, Helsinki, Marin CA, Ottawa, Pittsburgh, Portland, Southern California (Los Angeles/Orange County area), St. Louis, Vancouver. Whether or not there's currently a meetup in your area, you can sign up to be notified automatically of any future meetups. And if you're not interested in notifications you can still enter your approximate location, which will let meetup-starting heroes know that there's an interested LW population in their city! If your meetup has a mailing list that you'd like mentioned here, or has become regular and isn't listed as such, let me know! Want to help out the common good? If one of the meetups listed as regular has become inactive, let me know so we can present more accurate information to newcomers.

### CompsciOverflow

#### Computer Architecture: control pins, CE OE

Just understanding some syntax. My RAM (6116) and ROM (27C64) have asserted-low CE and OE pins. These, I believe, are control pins. I'm assuming that to use the RAM, for example, chip enable (CE) has to be low, then output enable (OE) has to be low for data to be sent to or read by the CPU. If I'm right so far: looking at the 6116 RAM, the data bus pins can be either input or output, so CE and OE don't determine whether data is being read from or written to that address location. There's another pin called WE, which I'm assuming is a control pin for whether data is input or output. What does WE stand for, and am I right with what I've assumed?

### TheoryOverflow

#### Upper bound on Euler characteristic of downward closed family

(Definition: $\mathcal{F}$ is called downward closed if for any $A \in \mathcal{F}$ and $B \subseteq A$ it holds that $B \in \mathcal{F}.$)

Let $\mathcal{F}$ be a downward closed family of subsets of $\{1, ..., n\}$ generated by $m$ sets. Let $\chi(\mathcal{F})$ := the number of odd-cardinality members of $\mathcal{F}$ minus the number of even-cardinality members of $\mathcal{F}.$

Prove or disprove: $|\chi(\mathcal{F})| \leq m^{O(\log n)}.$

As a side question, I am also curious about how easy/hard it is to compute/approximate $|\chi(\mathcal{F})|$ given $n$ and the $m$ generators of $\mathcal{F}.$

### /r/compsci

#### What are your top 3 challenges working with clients on large system projects?

I hope you don't mind a visit from an outsider. I heard this forum was a good way to reach computer scientists and developers. We often work with developers or contract out development work for large system projects (CRM, ERP, WMS, etc.). I would like to improve our interactions with developers, and make things a little bit easier for the ones we work with. Would you mind sharing?:

• your top 3 challenges working with your clients, along with a brief description
• how you have solved them (if applicable)

I know it might not be the same for everyone, but any help you can provide would be sincerely appreciated!

submitted by dma38

### Portland Pattern Repository

#### Workplace Democracy (by 146.233.0.202 13 hours ago)

#### Left And Right Wing Politics (by adsl-98-95-113-36.jan.bellsouth.net 13 hours ago)

### StackOverflow

#### Scalatra could not find or load main class

I have a hello world Scalatra application.
I added the scalatra-sbt plugin and:

val myDistSettings = DistPlugin.distSettings ++ Seq(
  mainClass in Dist := Some("WebServerLauncher"),
  memSetting in Dist := "2g",
  permGenSetting in Dist := "256m",
  envExports in Dist := Seq("LC_CTYPE=en_US.UTF-8", "LC_ALL=en_US.utf-8"),
  javaOptions in Dist ++= Seq("-Xss4m", "-Dfile.encoding=UTF-8", "-Dlogback.configurationFile=logback.prod.xml", "-Dorg.scalatra.environment=production")
)

After running sbt dist it generates a .zip with:

#!/bin/env bash
export CLASSPATH="lib:lib/logback-core-1.0.6.jar:lib/jetty-webapp-8.1.8.v20121106.jar:lib/jetty-io-8.1.8.v20121106.jar:lib/scalatra-scalate_2.10-2.2.2.jar:lib/jetty-server-8.1.8.v20121106.jar:lib/mime-util-2.1.3.jar:lib/scalatra-common_2.10-2.2.2.jar:lib/scalate-core_2.10-1.6.1.jar:lib/jetty-util-8.1.8.v20121106.jar:lib/jetty-servlet-8.1.8.v20121106.jar:lib/joda-convert-1.2.jar:lib/juniversalchardet-1.0.3.jar:lib/slf4j-api-1.7.5.jar:lib/scala-library-2.10.4.jar:lib/jetty-continuation-8.1.8.v20121106.jar:lib/grizzled-slf4j_2.10-1.0.1.jar:lib/config-1.0.0.jar:lib/javax.servlet-3.0.0.v201112011016.jar:lib/jetty-xml-8.1.8.v20121106.jar:lib/rl_2.10-0.4.4.jar:lib/jetty-security-8.1.8.v20121106.jar:lib/akka-actor_2.10-2.1.2.jar:lib/jetty-http-8.1.8.v20121106.jar:lib/scala-reflect-2.10.0.jar:lib/scalate-util_2.10-1.6.1.jar:lib/logback-classic-1.0.6.jar:lib/scalatra_2.10-2.2.2.jar:lib/joda-time-2.2.jar:lib/scala-compiler-2.10.0.jar:"
export JAVA_OPTS="-Xms2g -Xmx2g -XX:PermSize=256m -XX:MaxPermSize=256m -Xss4m -Dfile.encoding=UTF-8 -Dlogback.configurationFile=logback.prod.xml -Dorg.scalatra.environment=production"
export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.utf-8
java $JAVA_OPTS -cp $CLASSPATH WebServerLauncher

When I try to run it I get:

Error: Could not find or load main class WebServerLauncher

There is a WebServerLauncher.class in the lib directory. How do I launch it correctly? Thank you.
#### "dist" command gets "not a valid command" error

I have a working Play Framework 2.1 application generated with Typesafe Activator that I've developed in Scala. I'm trying to deploy it to CloudBees using the instructions that can be found here: http://wiki.cloudbees.com/bin/view/RUN/Playframework#HDeployingaPlay2application using the method described under "Using Cloudbees SDK." However, when I load up the play console and try to run the "dist" command, I get the error "Not a valid command: dist." I've tried to run this three different ways:

1. In the terminal window (I'm using Mac OS X), I navigated to the project directory, ran the "activator" application (there is no application in that directory called "play", but "activator" seems to be the equivalent), then from the prompt that appears I entered the command "dist."
2. I downloaded the regular (non-activator) Play Framework distribution file, added the directory to my path using "export PATH=$PATH:/Applications/play-2.2.2", navigated to the project directory, and ran the command "play dist."
3. Installed play using Homebrew. Navigated to the project directory and ran "play dist".

All three methods give me the same error (see below). Is the method different for my version of play? Am I missing something from the sbt file? How can I get this working?

Full output for "play dist":

```
Macmini-##########-#:nimrandslibrary.searchfu.esl kpyancey$ play dist
[info] Loading project definition from /Users/kpyancey/Projects/NimrandsLibrary.SearchFu.Esl/project
[info] Set current project to NimrandsLibrary.SearchFu.Esl (in build file:/Users/kpyancey/Projects/NimrandsLibrary.SearchFu.Esl/)
[error] Not a valid command: dist (similar: set, iflast, last)
[error] Not a valid project ID: dist
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: dist (similar: test, ivy-sbt, history)
[error] dist
[error] ^
```

### CompsciOverflow

#### Expected maximum bin load, for balls in bins with equal number of balls and bins

Suppose we have $n$ balls and $n$ bins. We put the balls into the bins randomly. If we count the maximum number of balls in any bin, the expected value of this is $\Theta(\ln n/\ln\ln n)$. How can we derive this fact? Are Chernoff bounds helpful?

### Planet FreeBSD

#### Weekly Feature Digest 26 — The Lumina Project and preload

This week the PC-BSD team has ported over preload, which is an adaptive readahead daemon. It monitors applications that users run, and by analyzing this data, predicts what applications users might run, and fetches those applications and their dependencies to speed up program load times. You can look for preload in the next few days in edge packages and grab it for testing on your own system. There is an early alpha version of the Lumina desktop environment that has been committed to ports / packages. Lumina is a lightweight, stable, fast-running desktop environment that has been developed by Ken Moore specifically for PC-BSD. Currently it builds and runs, but lacks many other features as it is still in very early development. Grab it from the edge packageset and let us know what you think, and how we can also improve it to better suit you as a user!
Other updates this week:

* Fixed some bugs in ZFS replication causing snapshot operations to take far longer than necessary
* Fixed an issue with dconf creating files with incorrect permissions, causing browsers to fail
* Added Lumina desktop ports / packages to our build system
* PC-BSD Hindi translation 100% complete
* Improvements to the update center app
* Updated PCDM so that it will use "pw" to create a user's home directory if it is missing but the login credentials were valid. This should solve one of the last reported issues with PCDM and Active Directory users.
* Bugfix for pc-mounttray so that it properly ignores the active FreeBSD swap partition as well.
* Another small batch of 10.x PBI updates/approvals.

### Fefe

#### China has published an embarrassing statistic: ...

China has published an embarrassing statistic: 16.1% of its soil is polluted, and as much as 19.4% of its arable land. Of course not all of it is equally badly polluted, but given China's size this is a pretty staggering statistic.

### HN Daily

#### Daily Hacker News for 2014-04-18

The 10 highest-rated articles on Hacker News on April 18, 2014 which have not appeared on any previous Hacker News Daily are:

## April 18, 2014

### /r/emacs

#### How to run a shell command from a specific directory in emacs?

### StackOverflow

#### What are the benefits / drawbacks of functional object creation in JavaScript?

I just watched Douglas Crockford talk about how prototypical inheritance is "not a good idea either" (YouTube 35m55s). I don't really care about his views on prototypical inheritance in conjunction with JavaScript since it is such an essential part of the language that it will always be there.
But I would like to know what benefits I am reaping by using the functional object creation that he is showing in the link:

```javascript
// Class Free Object Oriented Programming
function constructor(init) {
    var that = other_constructor(init),
        member,
        method = function () {
            // init, member, method
        };
    that.method = method;
    return that;
}
```

After the video I re-read the part about Functional Object Creation in his book "JavaScript: The Good Parts", Chapter 5: Inheritance. But I can't really see the big difference. I can get private members just fine with the constructor pattern:

```javascript
function Constructor(value) {
    var private = value;
    this.getPrivate = function () {
        return private;
    };
}

var OBJ1 = new Constructor(5);
var OBJ2 = new Constructor('bacon');

console.log(OBJ1.getPrivate()); // 5
console.log(OBJ2.getPrivate()); // bacon
```

The only difference I can spot between the constructor pattern and the functional pattern is the omission of the new keyword. By avoiding the use of the new keyword we can avoid the error of forgetting it. Writing this:

```javascript
var panda = createBear();
```

Instead of this:

```javascript
var panda = new Bear();
```

Makes me think it is mainly down to personal preference. I can see how avoiding the new keyword can be useful, and I might adopt the functional pattern. But this is the only reason I can see as to why you would do it. Can I please get some more information on why one would be better or worse than the other?

#### What is zip (functional programming)?

I recently saw some Clojure or Scala (sorry, I'm not familiar with them) and they did zip on a list or something like that. What is zip and where did it come from?

### /r/netsec

#### Revisiting Mac OS X Kernel Rootkits

submitted by _rs [link] [comment]

### StackOverflow

#### How to use a Java project in Eclipse from a Clojure project

I have an existing Java code base. It is organized into several projects in Eclipse. These projects tend to require one another.
For example:

```
Project A -> Common Lib 1 -> 2nd level dependency 1
          |
          -> Common Lib 2
```

To utilize code from other projects I can go to "Build Path", "Projects" tab and click "Add". Is there something similar that can be done for Clojure code (in Eclipse), so that I can easily start using code from my existing Java projects in Clojure?

#### Fluentd to Logstash output plugin

I am trying to read from the scribe server using fluentd and output those logs to be stored in Logstash for now. I know it's very stupid to log the scribe_central logs to another central logger, but we need this to be done in our current architecture. Does anyone know if there is a plugin to do that? I searched Google but could not find any.

### /r/scala

#### [Hiring] Become buzzword-compliant! Work on Big Data Analytics™ and Anomaly Detection at Metafor Software! (Vancouver, BC)

### /r/compsci

#### Can someone ELI5 the closest pair - sieve method?

Can someone explain to me how this works? These are my teacher's notes on the topic, but I just can't seem to make sense of it: http://www.cs.arizona.edu/classes/cs445/spring14/closestpair.pdf So instead, I searched for the paper mentioned on the second slide and found: https://www.cs.umd.edu/~samir/grant/cp.pdf But I'm still having a hard time wrapping my head around what my teacher's talking about.

submitted by fizzix_ [link] [comment]

### Halfbakery

#### Stock market game (0.0)

### Portland Pattern Repository

#### Microsoft Quality (by NickBensema 16 hours ago)

#### Alan Kays Definition Of Object Oriented (by 162.211.71.215 16 hours ago)

### Planet Theory

#### TR14-057 | Measure of Non-pseudorandomness and Deterministic Extraction of Pseudorandomness | Diptarka Chakraborty, Manindra Agrawal, Debarati Das, Satyadev Nandakumar

In this paper, we propose a quantification of distributions on a set of strings, in terms of how close to pseudorandom the distribution is.
The quantification is an adaptation of the theory of dimension of sets of infinite sequences first introduced by Lutz \cite{Lutz:DISS}. We show that this definition is robust, by considering an alternate, equivalent quantification. It is known that pseudorandomness can be characterized in terms of predictors \cite{Yao82a}. Adapting Hitchcock \cite{Hitchcock:FDLLU}, we show that the log-loss function incurred by a predictor on a distribution is quantitatively equivalent to the notion of dimension we define. We show that every distribution on a set of strings of length $n$ has a dimension $s \in [0,1]$, and for every $s \in [0,1]$ there is a distribution with dimension $s$. We study some natural properties of our notion of dimension. Further, we propose an application of our quantification to the following problem. If we know that the dimension of a distribution on the set of $n$-length strings is $s \in [0,1]$, can we deterministically extract out $sn$ \emph{pseudorandom} bits out of the distribution? We show that this is possible in a special case - a notion analogous to the bit-fixing sources introduced by Chor \emph{et al.} \cite{CGHFRS85}, which we term a \emph{nonpseudorandom bit-fixing source}. We adapt the techniques of Kamp and Zuckerman \cite{KZ03} and Gabizon, Raz and Shaltiel \cite{GRS05} to establish that in the case of a non-pseudorandom bit-fixing source, we can deterministically extract the pseudorandom part of the source. Further, we show that the existence of an optimal nonpseudorandom generator is enough to show $\mathrm{P}=\mathrm{BPP}$.

#### TR14-056 | Factors of Sparse Polynomials are Sparse | Rafael Mendes de Oliveira, Zeev Dvir

We show that if $f(x_1,\ldots,x_n)$ is a polynomial with $s$ monomials and $g(x_1,\ldots,x_n)$ divides $f$ then $g$ has at most $\max(s^{O(\log s \log\log s)}, d^{O(\log d)})$ monomials, where $d$ is a bound on the individual degrees of $f$.
This answers a question of von zur Gathen and Kaltofen (JCSS 1985), who asked whether a quasi-polynomial bound holds in this case. Two immediate applications are a randomized quasi-polynomial time factoring algorithm for sparse polynomials and a deterministic quasi-polynomial time algorithm for sparse divisibility.

### QuantOverflow

#### selecting test data for neural networks

I have been working on a neural network based on certain technical indicators. As people familiar with neural networks would know, after developing a hypothesis the developer is also supposed to provide a set of data to learn from. Now if it were a case of developing neural networks for spam filtering, I would provide it with sets of spam and non-spam data. But in my case, how do I select the buy/sell points? Do I just randomly select the entry points where I can visually see the movement in price that I desire, or is there a better approach?

#### PCA related Query

I am currently working on a project in grad school where I am using a PCA approach. I have 4 stocks. I used R to generate eigenvalues and eigenvectors.

Eigenvalues:

```
Number  Value       Diff        Proportion
1       3.51300808  3.18720008  0.8782
2       0.325808    0.17528152  0.08145
3       0.15052648  0.13986904  0.03763
4       0.01065744              0.00266
```

Eigenvectors:

```
         pc(1)      pc(2)        pc(3)       pc(4)
Stock 1  0.516215    0.4136083  -0.1234068  -0.7397439
Stock 2  0.5131561   0.31805179 -0.5016014   0.6196046
Stock 3  0.5048276   0.03720224  0.8298851   0.2346397
Stock 4  0.4640495  -0.85228354 -0.2108496  -0.1175298
```

How do I arrive at the eigenportfolios? My paper says I can arrive at the weights by dividing the eigenvector by the standard deviation of the stocks and then multiplying them by the stock returns. For example, the stdev of stock 1's daily data is 0.0375. When I divide the factor loadings of Stock 1, PC(1), I get 98-something, which cannot be the weight of that stock in the eigenportfolio. Also, can you please tell me how to interpret the eigenportfolio if I use PC(1) loadings and PC(2) loadings.
I am really confused and stuck. Luckily I came across this site. Any help is appreciated.

### Planet Theory

#### danah boyd, Randall Munroe, and netizens.

danah boyd, author of 'It's Complicated', just gave a tech talk at Google. Her book has been in the news a lot lately, so I'll skip the details (although Facebook ought to be at least slightly worried). But what I enjoyed the most about her talk was the feeling that I was listening to a true netizen: someone who lives and breathes on the internet, understands (and has helped build) modern technology extremely well (she is a computer scientist as well as an ethnographer), and is able to deliver a subtle and nuanced perspective on the role and use of technology amidst all the technobabble (I'm looking at you, BIG data) that inundates us. And she delivers a message that's original and "nontrivial". Both about how teens use and interact with social media, and about how we as a society process technological trends and put them in context of our lives. Her discussion of context collapse was enlightening: apart from explaining why weddings are such fraught experiences (better with alcohol!) it helped me understand incidences of cognitive frisson in my own interactions. What she shares with Randall Munroe in my mind is the ability to speak unselfconsciously and natively in a way that rings true for those of us who inhabit the world of tech, and yet articulate things that we might have felt, but are unable to put into words ourselves. Of course they're wildly different in so many other ways, but in this respect they are like ambassadors of the new world we live in.

### StackOverflow

#### "Or"-ing two Options in Scala?

I want to do something like this:

```scala
def or[A](x: Option[A], y: Option[A]) = x match {
  case None => y
  case _    => x
}
```

What is the idiomatic way to do this?
The best I can come up with is Seq(x, y).flatten.headOption

### TheoryOverflow

#### Small-step semantics: for-loop

I'm trying to construct the small-step semantic rules for for-loops, but I can't find anything about them in the literature (only about while-loops). I was wondering if anyone could help me out with this?

$\quad \displaystyle\sigma, \text{for } s_1 \, e_1 \, e_2 \, s_2 \, \rightarrow \, \sigma, \text{if } e_1 \text{ then (} s_2 ; \, e_2; \, \text{for } s_1 \, e_1 \, e_2 \, s_2 \text{ ) else } skip$

where $\sigma$ is a local value store, $s_1$ is for example $i = 0$, $e_1$ could equal $i < 4$, and $e_2$ could be $i = i + 1$.

### Planet Clojure

#### Learn Clojure…

Learn Clojure – Clojure Koans Walkthrough in Light Table IDE. You have heard of Clojure and no doubt the Clojure Koans. Now there are videos solving the Clojure Koans using the Light Table IDE. I first saw this at Clojure Koans by Christopher Bare.

### QuantOverflow

#### FX Rate dynamics

Let's suppose the USD/EUR price in USD follows a GBM with $$dS_t = rS_t\,dt + \sigma S_t\,dW_t$$ What process does EUR/USD follow in EUR?

#### Risk neutral measure for jump processes

How can I construct a risk-neutral measure for the option price if the asset price has the form $$S(t)=S(0)\exp\left[\sigma W(t)+(\alpha-\beta\lambda-\tfrac{1}{2}\sigma^2)t+Q(t)\right]?$$ Here $W(t)$ is a Brownian motion and $Q(t)$ is a compound Poisson process. Thank you beforehand.

### /r/freebsd

#### FreeBSD on an SSHD drive

Dear /r/freebsd, I'm preparing to buy a new laptop in the near future (probably a Latitude from Dell). I've read different opinions about these SSHD drives - some say they could mean trouble, some say they just work. Does anybody here happen to use such a storage device? Should I choose it or stick with something traditional (HDD)?

submitted by pfm [link] [8 comments]

### StackOverflow

#### Scala: "number" interpolation

Scala has string interpolation like raw"\n" for raw strings. Does it have anything like number interpolation, e.g. 1px for one pixel?
A nice syntax for numeric units would both make code more readable and make it easier to write safer code. Like strings, numbers have a nice literal syntax and are fundamental. Prefix notation px(1) is not how people write units:

```scala
case class px(n: Int)
```

And I don't think a postfix notation via implicit conversion can work:

```scala
case class Pixels(n: Int) {
  def px() = Pixels(n)
  def +(p: Pixels) = p match { case Pixels(m) => Pixels(n + m) }
}
implicit def Int2Pixels(n: Int) = Pixels(n)
```

1. It needs a dot or space or parens (i.e. not (1 px) or (1)px or 1.px, which is not how humans write units).
2. It won't check types, i.e. we want to explicitly cast between these numeric type-alias things and numbers themselves (i.e. 1.px + 2 and 1 + 2.px and def inc(p: Pixels) = p + Pixels(1) with inc(0) all don't fail, because of the implicit cast, when they should).

### UnixOverflow

#### Install r5u87x on FreeBSD

I need to install the Ricoh r5u87x webcam loader on FreeBSD, because it solves a suspend problem on my Vaio laptop under Unix/Linux, as I found when I tried it in Ubuntu; but the loader is for Linux only, I think. Is there any solution for FreeBSD? Thanks.

### StackOverflow

#### FileNotFoundException Could not locate clojure/java/jdbc__init.class

I have a problem with importing jars in Clojure. I used lein to add dependencies. This is the code from project.clj:

```clojure
(defproject recommendation "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/java.jdbc "0.0.6"] ;; jdbc
                 [mysql/mysql-connector-java "5.1.6"]]
  :aot :all
  :main recommendation.core)
```

I typed lein deps in cmd, and it downloaded 3 jars into the lib folder.
This is the code from recommendation.core:

```clojure
(ns recommendation.core
  (:require [clojure.java.jdbc :as sql]))
```

And I get an exception:

```
FileNotFoundException Could not locate clojure/java/jdbc__init.class or clojure/java/jdbc.clj on classpath: clojure.lang.RT.load (RT.java:443)
```

Can anybody tell me where I am wrong and what to do?

EDIT: I solved the problem by restarting the REPL. There was a problem with :aot :all too; I couldn't restart the application, and Eclipse was in "not responding" mode when I ran a REPL again. Thanks anyway.

### QuantOverflow

#### (College Student) Where can I find Historical Interest Rate Data?

Where can I find American historical savings account interest (bank) rates? If you can, please attach corresponding links.

### CompsciOverflow

#### A variant of Travelling Salesman: Is it NP-complete if its sub-problems are NP-complete? [on hold]

Suppose there is a travelling salesman who wants to travel through N cities in k countries (k <= N). For convenience, he will travel all the cities within a certain country and then move to another. In each country, he wants to find the shortest simple path that traverses all the cities. [Classic TSP problem]

Q1. Suppose moving to another country costs nothing. Is that still an NP-complete problem? Any suggestion for a proof?

Q2. (There could be connections with certain costs between two cities in two different countries.) Suppose the salesman has to visit the countries (but not cities) in a certain order. Is that still an NP-complete problem? Any suggestion for a proof?

(Precisely, I should have used "NP-hard problem". I hope you can translate it into the decision version.)

Q1: It is intuitive that a general TSP instance can be reduced to this problem for k = 1. What I know is that you can reduce a general instance of a known NP-complete problem into a "special" instance of the problem that you wish to reduce to. However, I think "k = 1 (with N)" is a very special case that is much harder than instances where k > 1.
That's why I am not quite certain this intuitive approach is correct. "This is a dump of an exercise problem, not a question." I feel sorry for my algorithms teacher. I made this exercise up for a similar situation in my research (not in the area of algorithms, for sure). If I let k = 1, the instance being reduced to seems to be the hardest instance, and I doubt whether it is general. Since the sub-problems are NP-complete (in the decision version), let the cost for the jth country with m cities be $T_j(m)$; then $\sum_{j=1}^{k} T_j(m)$ is much smaller than $T(N)$. Also, the problem is not that hard when k = N. (k is part of the input?) "Is it NP-complete if its sub-problems are NP-complete?" (So I made up the problem.) If so, that intuitive approach seems correct.

Q2: I think it is kind of related to Q1. I didn't post it because I tend to solve it on my own. (Hints are welcome.) If you solve it, I may have to consider putting you as a coauthor later :). Thanks

#### At what $n$ does an $n^2 \times n^2$ sudoku puzzle take too long to solve? [on hold]

I'm creating a sudoku solver, and I'm wondering at what point, with a simple backtracking sudoku-solving algorithm, does the result take way too long to compute? I'm thinking more than 30 minutes? I'll probably try to implement some heuristics, but I want to know at what point I should not expect a solution within 30 minutes.

### /r/emacs

#### Why is my font taller in emacs? Same font/size.

### CompsciOverflow

#### Efficient lookup when key is made of multiple elements and elements can be empty

I want to create a map where the key contains multiple elements and the elements can be empty/null. The empty values are treated as "anything". I want the lookup function to match when the stored key is the lookup value or is a generalised version of it - the index key has empties where the lookup value has values. I think the formalisation would be "the lookup value logically subsumes the index-key".
I also want the lookup function to return the most specific index-key, that is, the key with the fewest empties. For example, if the data is stored in a (<key>, <value>) tuple with the key being a tuple of the elements and ? representing the empty set/null value:

```
((1, ?, 6, 3), "hey")
((1, 5, 6, 3), "hi")
((2, ?, ?, ?), "hello")
```

So lookup((2, 4, 5, 6)) -> "hello". And lookup((1, 5, 6, 3)) -> "hi", because (1, 5, 6, 3) is more specific than (1, ?, 6, 3). A simple solution is to store them as shown above and simply look through them. This would take $O(nm)$ where $n$ is the number of entries and $m$ is the number of elements in the key. Checking in most-to-least-specific order would mean a match could be returned immediately. Is there an approach that could improve on this? Thank you

### /r/netsec

#### Codeigniter Object Injection Vulnerability via Encryption Key

### /r/emacs

#### Good text-to-image tools like ditaa or plantuml that would be good for geometry and algebra?

I've been using org-mode more and more, especially as I become more proficient with elisp and calc. I played a bit with ditaa, but what I'm really looking for is a way to create images like what you'd see in a high school algebra or geometry textbook. Plane figures, polygons, circles, ellipses, graphs in the cartesian and/or complex plane, etc. I'd prefer something that integrates fairly easily with Emacs org-mode, that I could put to use in my day job as a high school math teacher. I've got a lot of FOSS tools at hand now: Gimp; Inkscape; LibreOffice Math; etc. Any suggestions?

submitted by LordAgni [link] [1 comment]

### CompsciOverflow

#### A procedure for Topological sort, proof for its correctness

Definition: A preserved invariant of a state machine is a predicate, $P$, on states, such that whenever $P(q)$ is true of a state, $q$, and $q \rightarrow r$ for some state, $r$, then $P(r)$ holds.

Definition: A line graph is a graph whose edges are all on one path.
Definition: Formally, a state machine is nothing more than a binary relation on a set, except that the elements of the set are called "states," the relation is called the transition relation, and an arrow in the graph of the transition relation is called a transition. A transition from state $q$ to state $r$ will be written $q \rightarrow r$.

DAG: Directed Acyclic Graph

The following procedure can be applied to any directed graph, $G$:

1. Delete an edge that is in a cycle.
2. Delete edge $<u \rightarrow v>$ if there is a path from vertex $u$ to vertex $v$ that does not include $<u \rightarrow v>$.
3. Add edge $<u \rightarrow v>$ if there is no path in either direction between vertex $u$ and vertex $v$.

Repeat these operations until none of them are applicable. This procedure can be modeled as a state machine. The start state is $G$, and the states are all possible digraphs with the same vertices as $G$.

(b) Prove that if the procedure terminates with a digraph, $H$, then $H$ is a line graph with the same vertices as $G$. Hint: Show that if $H$ is not a line graph, then some operation must be applicable.

(c) Prove that being a DAG is a preserved invariant of the procedure.

(d) Prove that if $G$ is a DAG and the procedure terminates, then the walk relation of the final line graph is a topological sort of $G$. Hint: Verify that the predicate $P(u,v)$ :: there is a directed path from $u$ to $v$ is a preserved invariant of the procedure, for any two vertices $u, v$ of a DAG.

(e) Prove that if $G$ is finite, then the procedure terminates. Hint: Let $s$ be the number of cycles, $e$ be the number of edges, and $p$ be the number of pairs of vertices with a directed path (in either direction) between them. Note that $p \leq n^2$ where $n$ is the number of vertices of $G$. Find coefficients $a, b, c$ such that $as + bp + e + c$ is nonnegative integer valued and decreases at each transition.

My Problems: I got stuck with problems $d$ and $e$, but solutions to other problems are welcome too.
At problem $d$, I could not understand the hint: why it is given and how it helps. In my attempt at proving $d$, I am trying to show that the given procedure always preserves the order of the vertices, as associated with edges, in the start graph $G$. So a line graph is automatically a topological sort, since the "precedence order" of the vertices is preserved. But operation number $3$ is problematic; how do I show it preserves precedence?

### StackOverflow

#### In (reduce f val coll), is the val an accumulator?

When you call reduce and pass it a function and two arguments, can the first argument be considered to be an accumulator? Is it always an accumulator? Is it sometimes an accumulator? I was reading a blog entry about using Clojure to parse big files and found this line:

```clojure
(reduce line-func line-acc (line-seq rdr))
```

Link to the blog entry: http://lethain.com/reading-file-in-clojure/ What about a simple (reduce + [1 2 3])? Is there an accumulator involved? I take it my question boils down to: "What exactly is an accumulator?" But I'd still like to understand the relation between an accumulator and the reduce function. So any answers to these specific (related) questions are most welcome!

### TheoryOverflow

#### Most efficient algorithm to search an unsorted array with a very precise data structure

(I apologize in advance if this question sounds a bit practical, but I suspect it might have an interesting theoretical aspect.) I have a (large) array of data, not completely sorted, but which has a very precise structure, defined as follows. The array has a length that is a (very large) power of 4. The data is such that:

• if the array is split in 4 parts - all with the same number of elements - then in each part the first element is the Minimum of this part, and the last element is the Maximum of this part (minimum < maximum).
• if we take any one of these parts and, within it, repeat the subdivision into 4 equal parts, the above fact always holds again for each of the new parts, all the way down, until we arrive at the smallest part, of size 4 elements (in other words, a sort of "fractal" arrangement, we might perhaps say).

Question: I need to search this array for a given specific value.

• What would be the most efficient algorithm to perform the search, given the above structure?
• And what is the best complexity I should expect for this task?

(I would also like to know if this sort of problem is well known and has a name, and whether there are additional pointers I can read. Thank you.)

### /r/compsci

#### I wanna build a peer-to-peer chat system. Where do I begin?

Hello /r/compsci I am a CS student. Summer is here and I want to do something during it. I wanted to make something to replace my reliance on Facebook for keeping in touch with friends. So I decided to build a peer-to-peer chat system. I have some programming experience but nothing when it comes to building a p2p chat application. I know about the way the internet works (OSI7, TCP/IP, UDP etc.) but I have never programmed in these environments before. I was researching p2p chat systems and the best thing I could find that resembles what I want to make is WASTE. Something like this would be ideal for what I need. Would anyone be kind enough to tell me what to learn in order to make this into reality? PS: I want to build this from the ground up. This would be a good opportunity to learn new things and get some experience.

submitted by Droidx4_66 [link] [17 comments]

### QuantOverflow

#### Why are short expiries associated with more pronounced volatility skews?

I've noticed that for a given strike price, the shorter expiration dates of options have more pronounced volatility skews. Why is that?
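A common intuition (a minimal numeric sketch, not a full answer, with illustrative numbers I chose myself): under a pure diffusion with volatility $\sigma$, a strike at fixed log-moneyness $k$ sits $|k|/(\sigma\sqrt{T})$ standard deviations from the forward. As $T \to 0$ this distance blows up, so a diffusion assigns a vanishing probability to reaching the strike, while jump risk keeps out-of-the-money options bid; matching those prices forces implied vols at fixed strikes to rise steeply relative to ATM, steepening the skew.

```python
import math

S, K, sigma = 100.0, 90.0, 0.20   # illustrative spot, fixed strike, flat vol (assumptions)
k = abs(math.log(K / S))          # fixed log-moneyness distance, ~0.105

# Number of "diffusive standard deviations" between spot and strike, per expiry T (years).
# The same strike gets further away (in sigma*sqrt(T) units) as expiry shrinks:
z = {T: k / (sigma * math.sqrt(T)) for T in (1.0, 0.25, 0.05)}
for T in sorted(z, reverse=True):
    print(f"T={T:5.2f}y  strike is {z[T]:.2f} sigma away")
```

The ratio scales as $1/\sqrt{T}$, which is why the same fixed-strike smile looks far steeper at short expiries.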
### Lobsters

#### Big-O Algorithm Complexity Cheat Sheet

### StackOverflow

#### Run sequential process with scala future

I have two external processes to be run sequentially:

```scala
val antProc = Process(Seq(antBatch, "everythingNoJunit"), new File(scriptDir))
val bossProc = Process(Seq(bossBatch, "-DcreateConnectionPools=true"))

val f: Future[Process] = Future {
  println("Run ant...")
  antProc.run
}

f onSuccess {
  case proc => {
    println("Run boss...")
    bossProc.run
  }
}
```

The result is:

```
Run ant...
Process finished with exit code 0
```

How do I run antProc until completion, then bossProc? The following method seems to achieve the purpose. However, it's not a Future approach.

```scala
antProc.!<
bossProc.!<
```

### CompsciOverflow

#### Time complexity of base conversion

EDIT: As requested, a single question: Why can't arbitrary base conversion be done as fast as converting from base $b$ to base $b^k$? There is a big time complexity difference, so I am also interested in further reading material about it.

Old, original question: Conversion between power-of-2 radices can be done faster than between non-power-of-2 radices; it can even be done in parallel, as every digit (or some group of them) can be decoded independently of the rest. For example, the binary number 00101001 can be converted to hexadecimal 0x29 nibble by nibble (0010 and 1001), and vice versa (i.e. every hex digit can be parsed to 4 bits independently of the rest), but doing that conversion to decimal (or any other non-power-of-2 radix) is not so easy because the digits affect each other. I've seen the time complexity of math operations on Wikipedia, and there is also a related question on StackOverflow saying the time complexity of conversions of arbitrary digit length is $\mathcal{O}(M(n)\log n)$. I'm not interested in general time complexity bounds for any base conversion, but I would like to know more about the big differences in time complexity between power-of-2 conversions and any other base conversions.
It could be a general fact that conversions can be done faster when the two bases are powers of a common base - not just for base 2; the same holds for base 10 to base 100. Is there any known proof or material around this?

### TheoryOverflow

#### Is the complement of a NP-class problem in NP? [on hold]

NP is defined as follows: NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs of the fact that the answer is indeed "yes". So, if we look at the complement of a problem in NP, all NO instances for this problem are YES instances for the first one and vice versa. So, we have advice for the YES inputs in the first problem which can be verified efficiently. In the second version of the problem we have advice for the NO instances which is verifiable in polynomial time. But the definition only considers the YES instances, which means the second version won't belong to the NP class. But the two problems are essentially the same. Basically, I am confused why nothing is said about the NO inputs in the definition. Does it mean that if for a decision problem every NO instance can also be verified in polynomial time (YES instances can also be verified efficiently), then the problem is in NP? Please help me understand.

### /r/emacs

#### Is it possible to make tramp not block *all of emacs* while waiting for remote servers?

Using tramp to access shells and files on remote computers is awesome, except for one thing -- when the remote server is slow to respond (or worse, went down for some reason) tramp blocks all of emacs, which is agonizing. Does anyone know if there's a way to stop this from happening? Or more precisely, how much work would be needed to fix this?

submitted by dilap [link] [12 comments]

### Fefe

#### The CIA's "Dr. Mengele", the doctor behind the torture program, ...

The CIA's "Dr. Mengele", the doctor behind the torture program, is defending himself in the Guardian.
Er sei ja bloß ein Mann, der von seiner Regierung gebeten worden sei, etwas für sein Land zu tun. Und im Übrigen habe die Folter doch funktioniert, findet er. Er glaubt, der Senatsreport schreibe nur deshalb, dass Folter nicht wirkt, weil sie Angst gehabt hätten, dass dann auch andere Länder Folter gegen Amerikaner anwenden würden. Aber das tun sie ja längst, glaubt er. Dass so jemand seinen Arbeitgeber George W Bush verteidigen würde, überrascht nicht. Aber das hier möglicherweise: He also criticized Obama's healthcare policy – a "shit sandwich" – and his administration's approach to global warming. Mitchell believes it's a myth. Na da passt ja mal wieder alles zusammen. Einen richtigen Wissenschaftler haben sie nicht gefunden, der ihre Drecksarbeit machen würde, ja? Da haben sie halt einen Crackpot aus Florida genommen? Krass. Der Typ ist übrigens schon 2004 im Report des Inspector General der CIA aufgetaucht. It said Mitchell and Jessen had "probably misrepresented" their "expertise" as experienced interrogators when pitching coercive techniques to the CIA as a way to obtain actionable intelligence from prisoners. Nicht die CIA ist auf den zugegangen, der ist auf die CIA zugegangen! Ach du meine Güte! ### StackOverflow #### A regex style matching library for generic matching on list items I've seen a library like this around before but then forgotten about what it was called. You can specify a pattern that matches elements in a list, something along the lines of: (def oddsandevens (pattern (n-of odd? :*) (n-of even? 2) :$))

(pattern-match oddsandevens [1 1 2 2]) => true
(pattern-match oddsandevens [1 1 1 2 2]) => true

(pattern-match oddsandevens [1 1 2 2 2]) => false
(pattern-match oddsandevens [1 1 2]) => false


If I'm totally imagining this, can someone shed light on how one might write one of these things?
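If the library doesn't turn up, the core of such a matcher is small enough to sketch. This is a hypothetical miniature in Python (the names `matches` and `STAR` are invented, and a real library would be far more general): a pattern is a list of (predicate, count) pairs, where count is an integer or a "zero or more" marker, matched with naive backtracking and an implicit end anchor like `:$`.

```python
STAR = "*"  # marker for "zero or more" (name is made up)

def matches(pattern, xs):
    """pattern: list of (predicate, count) pairs, count an int or STAR.
    The whole input list must be consumed (anchored, like :$)."""
    if not pattern:
        return not xs
    pred, count = pattern[0]
    if count == STAR:
        i = 0
        while True:
            # Try the rest of the pattern here; otherwise consume one
            # more element satisfying pred and retry (backtracking).
            if matches(pattern[1:], xs[i:]):
                return True
            if i < len(xs) and pred(xs[i]):
                i += 1
            else:
                return False
    if len(xs) < count or not all(pred(x) for x in xs[:count]):
        return False
    return matches(pattern[1:], xs[count:])

def odd(n):
    return n % 2 == 1

def even(n):
    return n % 2 == 0

oddsandevens = [(odd, STAR), (even, 2)]

print(matches(oddsandevens, [1, 1, 2, 2]))     # True
print(matches(oddsandevens, [1, 1, 2, 2, 2]))  # False
```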

### CompsciOverflow

#### Count number of ways to place ones in an $M \times M$ matrix so that every row and column has $k$ ones?

On math.stackexchange, someone asked how to count the number of ways to place $1$'s into a $10 \times 10$ matrix so that every row and column has $5$ $1$'s. Each element of the matrix must be either zero or one.

I came up with a recursive solution for an $N \times 10$ matrix. Subproblems are indexed by the counts $c_k$ of how many columns have $k$ $1$'s, for $k = 0, 1, 2, 3, 4, 5$. The counts $c_k$ have to satisfy $\sum_k c_k = 10$, and they also have to satisfy $\sum_k k c_k = 5N$ and $c_k = 0$ for $k > N$. The complexity of this algorithm basically boils down to how many distinct sets of valid indices $(c_k)_k$ there are.

For a $10 \times 10$ matrix I think this approach should work out nicely, but I worry the complexity might get prohibitively large if we wanted to count how many ways to get $M/2$ $1$'s in every row and column of an $M \times M$ matrix. So I'm wondering, is there a more efficient way to solve this counting problem? In other words, a better way than solving for $N \times M$ in increasing order of $N$ and keeping track of subcases indexed by $(c_k)_k$ such that $\sum_k c_k = M$ and $\sum_k k c_k = NM/2$? Also, for my solution, can anybody work out a good bound for how many sub-cases I have as a function of $M$?
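As a sanity check of the recursion sketched above, here is a minimal Python version that memoizes on the multiset of remaining column capacities; this is equivalent to indexing by the counts $(c_k)_k$, since the number of completions depends only on how many columns still need each number of $1$'s. It is meant only for small instances, not as an answer to the complexity question:

```python
from functools import lru_cache
from itertools import combinations

def count_matrices(n, k):
    """Count n-by-n 0/1 matrices with exactly k ones per row and column."""
    @lru_cache(maxsize=None)
    def go(rows_left, caps):
        # caps: sorted tuple of how many more 1s each column can take
        if rows_left == 0:
            return 1 if all(c == 0 for c in caps) else 0
        total = 0
        for cols in combinations(range(n), k):  # columns getting a 1 in this row
            if all(caps[c] > 0 for c in cols):
                nxt = list(caps)
                for c in cols:
                    nxt[c] -= 1
                total += go(rows_left - 1, tuple(sorted(nxt)))
        return total
    return go(n, (k,) * n)

print(count_matrices(4, 2))  # 90
```

For $n = 3$, $k = 2$ the complement of each matrix is a permutation matrix, so the count is $3! = 6$, which the sketch reproduces.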

### TheoryOverflow

#### Does Kannan's theorem imply that NEXPTIME^NP ⊄ P/poly?

I was reading a paper of Buhrman and Homer “Superpolynomial Circuits, Almost Sparse Oracles and the Exponential Hierarchy”.

On the bottom of page 2 they remark that the results of Kannan imply that $NEXPTIME^{NP}$ does not have polynomial size circuits. I know that in the exponential time hierarchy, $NEXPTIME^{NP}$ is just $\Sigma_2EXP$, and I also know that Kannan's result is that $\forall c\ \exists L\in\Sigma_2P$ such that $L \not\in Size(n^c)$. Of course, Kannan's theorem is NOT saying $\Sigma_2P \not\subset P/poly$ (in order for that to be the case we would need to show that $\exists L\in\Sigma_2P$ such that $\forall c$, $L \not\in Size(n^c)$). However, I don't see how Kannan's result implies that $NEXPTIME^{NP} \not\subset P/poly$.

### QuantOverflow

#### How to perform Empirical Mode Decomposition?

I am trying to use the EMD applied to EURUSD open price to train a machine learning algo (RVM).

I have run the EMD only once on my training set, and once on the training+test set.

The results on the test set alone are quite good. However, when I apply the algo to the last sample only, the predictions are bad.

Should I run the EMD on each sample of my training set using a sliding window?

I understand EMD is non-causal, but can it be used in some way for training a machine learning algo?
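One way to make the training procedure causal is to recompute the decomposition inside a window that ends at the current sample, so the feature for time t never sees data after t. The sketch below shows only that windowing scaffold; the `decompose` function is a crude moving-average stand-in for the real EMD (omitted here for brevity), so treat it as an illustration of the sliding-window idea, not of EMD itself:

```python
def decompose(window):
    # Crude stand-in for EMD: split the window into a smooth "trend"
    # and a "residual" (a real implementation would sift out IMFs here).
    k = 5
    n = len(window)
    trend = []
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        seg = window[lo:hi]
        trend.append(sum(seg) / len(seg))
    resid = [w - t for w, t in zip(window, trend)]
    return resid, trend

def causal_features(prices, window=32):
    """Features at index t are computed from prices[t-window .. t] only."""
    feats = []
    for t in range(window, len(prices)):
        w = prices[t - window:t + 1]  # never looks past t
        resid, trend = decompose(w)
        feats.append((resid[-1], trend[-1]))
    return feats
```

With this setup, the transformation applied at prediction time is exactly the one the model was trained on, which a single whole-series decomposition does not guarantee.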

### StackOverflow

#### Scala Override Return Type

I have a task in which I need to override the return type of a method. The problem is that the method is called, yet the return type is not overridden. Please help!

```scala
abstract class Parser {
  type T1 <: Any
  def test1(): T1

  type T2 <: String
  def test2(): T2
}

// class path: parser.ParserA
class ParserA extends Parser {
  type T1 = String
  override def test1(): T1 = {
    return "a,b,c"
  }

  type T2 = String
  override def test2(): T2 = {
    return "a,b,c"
  }
}

// somewhere in the project
val pa = Class.forName("parser.ParserA").newInstance().asInstanceOf[Parser]
println(pa.test1().length()) // error: value length is not a member of pa.TYPE_4
println(pa.test2().length()) // this works, prints 5
```


### StackOverflow

#### SBT create sub-projects using a collection

I've been searching for a while whether this is possible, with little success.

Using SBT, can you create a sub-project programmatically, without explicitly assigning each project to its own val?

My current project structure looks something like this:

```
root/
  common/          <--- This is another sub-project that others dependOn
  project/
    build.scala
  src/main/scala
  apps/            <--- sub-projects live here
    Sub1/
    Sub2/
```


Sub1 and Sub2 are both their own SBT projects.

My first attempt to link these projects together looked like this:

```scala
// root/project/build.scala
import sbt._
import Keys._

object build extends Build {
  lazy val common = project /* Pseudo-code */

  val names = List("Sub1", "Sub2")

  lazy val deps = names map { name =>
    Project(id = name, base = file(s"apps/$name")).dependsOn(common)
  }

  lazy val finalDeps = common :: deps

  lazy val root = project.in(file("."))
    .aggregate(finalDeps.map(sbt.Project.projectToRef): _*)
    .dependsOn(finalDeps.map(ClassPathDependency(_, None)): _*)
}
```

However, because SBT uses reflection to build its projects and sub-projects, this doesn't work. It only works if each sub-project is stated explicitly:

```scala
lazy val Sub1 = project.in(file("apps/Sub1"))
```

So the question: is there a way to programmatically build sub-project dependencies in SBT?

### DataTau

#### How Americans Die

### StackOverflow

#### Extend Scala Set with concrete type

Really struggling to figure out extending the immutable Set with a class that will represent a Set of a concrete type. I'm doing this to try to create a nice DSL. I'd like to have a class Thing, and when you add 'things' together you get a ThingSet object, which extends Set.

```scala
class Thing(val name: String) {
  def +(other: Thing): ThingSet = new ThingSet() + other
}
```

I just can't figure out how to make the ThingSet object. I know I need to mix in traits like GenericSetTemplate, SetLike, etc., but I just can't make it work. Please, can anybody give me some pointers, as I can't find anything explicit enough to learn from? I've tried looking at the BitSet and HashSet implementations, but get lost.

### /r/compsci

#### Digital systems: Bidirectional shift register?

Can anyone walk me through the design of a bidirectional shift register with a TTL 74175?

submitted by Wrong_Burgundy [link] [comment]

### StackOverflow

#### Convert a document with Play Json

I have a list of courses from the coursera api:

```
{"elements": [
  {"id": 69,
   "shortName": "contraception",
   "name": "Contraception: Choices, Culture and Consequences",
   "links": {}
  },
  ...
]}
```

I want to convert it to a document that looks like so (I use <--- arrows as comments):

```
{"Courses": [
  {
    "Type": "Course",
    "Title": "contraception",   <---- short name
    "Content": {                <---- the original course
      "id": 69,
      "shortName": "contraception",
      "name": "Contraception: Choices, Culture and Consequences",
      "links": {}
    }
  },
  ...
]}
```

Is it possible to perform this with the JSON-only API from Play? Here is how I do it presently (with conversion to Scala lists):

```scala
val courses = (response.json \ "elements")
  .as[List[JsValue]]
  .map { course =>
    // This is how we want our document to look
    Json.obj(
      "Type" -> "Course",
      "Provider" -> "Coursera",
      "Title" -> (course \ "name"),
      "Content" -> course
    )
  }
// then put this into the final json object with "Courses" ...
```

### CompsciOverflow

#### How to prove that any minimum vertex cover of a clique of size $n$ must have exactly $n-1$ vertices? [on hold]

I would appreciate any help with this question... I don't know how to prove NP-completeness when it's given with a new problem! How to prove that any minimum vertex cover of a clique of size $n$ must have exactly $n-1$ vertices?

### StackOverflow

#### Why did the Scala language choose the names eq and ne to compare references? [on hold]

I understand that Scala chose == and != as the replacement for equals (in most cases). I feel that Scala's == is like JavaScript's. For the same reason, I would expect Scala to choose === and !== to replace Java's == and !=, like JavaScript did. But Scala surprised me by choosing eq and ne for reference comparison. These names seem to come from Lisp and are not commonly seen in a curly-bracket language like C/C++/Java/JavaScript/C#. So, why did they choose the names eq and ne to compare references?

### /r/netsec

#### Why It's Time for Terms Like Static and Dynamic Analysis to Die

### CompsciOverflow

#### Please help me to identify the state of the art algorithms in depth-first branch and bound

I am thinking of undertaking a research project in constraint optimization.
The problem is a combinatorial search in a tree with a fixed goal depth. Among the approaches that I am considering is depth-first branch and bound. I'd like to know what the current state of the art in branch-and-bound depth-first search is before I rule out or pursue this avenue. I'm interested in such things as value ordering (i.e. which node to expand next) and cost modeling of leaves to inform the search heuristic.

### /r/netsec

#### TCP32764 backdoor again

### Lobsters

#### Buggy Security Guidance from Apple

### /r/clojure

#### Clojure questions by a newbie: how do you package/distribute? how do you make desktop applications?

How do you package Clojure applications for end-users that are not developers? Is it easy/possible to make desktop applications with Clojure?

submitted by TheMagicHorsey [link] [7 comments]

### Fefe

#### The Americans want to deny Russia's newest spy plane ...

The Americans want to deny Russia's newest spy plane an overflight permit. Since 1992 there has been a treaty (though not signed until 2002), part of the nuclear disarmament process, that allows the participating countries to fly over each other's territory with their reconnaissance planes and their best available sensors, so that they can independently check whether the others are secretly building up nuclear arms, or mobilizing or moving their militaries around. And now the Russians apparently have a new sensor on board that worries the US military so much that they no longer want to allow it.

### StackOverflow

#### Benefit of defining a trait whose purpose is only to mix into an object?

In scalaz (file Id.scala), the context is like the following:

```scala
/** Mixed into object Id in the package object [[scalaz]]. */
trait IdInstances {
  ....
}

object Id extends IdInstances
```

So why not directly use the following, without first defining IdInstances? What's the benefit of doing it like in Id.scala?
```scala
object Id {
  ... // with the same content as in IdInstances
}
```

#### How can I have an optional Sbt setting?

There is a project shared among multiple participants. Some participants have installed a global sbteclipse at ~/.sbt/0.13/plugins/plugins.sbt, while other participants haven't. I want to put some sbt settings in the project's build.sbt, like:

```scala
EclipseKeys.createSrc := EclipseCreateSrc.Unmanaged + EclipseCreateSrc.Managed + EclipseCreateSrc.Source
```

I wish to apply these settings only for those participants who have installed a global sbteclipse, without affecting the others. How can I achieve that?

#### sbt eclipse command changed files and packages

I created a new Scala project in Eclipse, then added a package and a Scala object. So far so good... I wanted to add an external library, so I added a project folder with the build.properties and plugins.sbt files, and another file, build.sbt, in the root project. In the terminal I compiled the project successfully with the sbt compile task. The problem is that after the sbt eclipse command, the Eclipse project changed from a Scala project to something else: all the packages changed to simple folders and the Scala project is ruined.

- Scala IDE: Build id: 3.0.3-20140327-1716-Typesafe
- Scala version: 2.10.4
- sbt version: 0.13.0

You can see it in the image.

#### Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect? If not, how can I find out that a connection to the queue was interrupted?
I've done the following test:

- start the queue (and some producer)
- start my consumer (it was getting messages as expected)
- stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
- start the queue (no new messages were received)

I am using camel in Scala (via akka-camel), but a Java solution would probably also be OK.

### /r/compsci

#### What would happen if all of the planet's source code was suddenly made opensource?

GPLv3 opensource, for instance.

submitted by daf121 [link] [83 comments]

### StackOverflow

#### scala pickling in the nontrivial case

Pickle that?

```scala
abstract class T
case class F(val l: List[T]) extends T // F(val l: [...Seq, Vector, Array...] [T])
case class I(val i: Int) extends T

val p = F(List(I(1), F(List(I(2), F(List(I(1), F(List(I(2))))))))).pickle
val s = p.value
val u = s.unpickle[F] // [T]
println(u)
```

and what is happening?

```
[error] ...
[error] ...
[error] ...
```

q.e.d. The good thing about scala/pickling is that it works in some cases.

#### Can't configure SBT dependency: object XYZ is not a member of package

I am working with IntelliJ IDEA 13.1.1 in order to set up a Scala SBT project. My build.sbt looks like this:

```scala
name := "MyProject"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-swing" % "2.10.4",
  "org.jfree" % "jfreechart" % "1.0.17"
)
```

As for scala-swing, it works perfectly in my project. But there are problems with the following line:

```scala
import org.jfree.data.xy.{XYDataset, DefaultXYDataset}
```

There is no error syntax highlighting in IDEA, but it still says the following during compilation:

```
Error:(6, 12) object jfree is not a member of package org
import org.jfree.data.xy.{XYDataset, DefaultXYDataset}
```

If I add the same jfreechart library using Maven, it does compile. But I really want to have everything set up just with SBT. Does anyone know the solution?
#### Scheme - Help Writing A Function

I'm trying to write a function in Scheme that takes two strings as input and returns a list of all optimal pairs of strings. In order to do this, I know that I need to make use of the following functions that I have already written; they will obviously be used as helper functions for the function that I'm trying to write.

1. alignment-score-tail

This function takes two strings, scores each character according to a scoring criterion, and accumulates the result. If two characters are equal, a score of 2 is obtained; if two characters are not equal, a score of -1 is obtained; and finally, if one character is an underscore and the other character something else, a score of -2 is obtained. Here is example input/output:

```scheme
> (alignment-score-tail "x" "x")
2
> (alignment-score-tail "x" "y")
-1
> (alignment-score-tail "x" "_")
-2
> (alignment-score-tail "Hello" "__low")
-4
```

2. change-string-pairs

This function takes two chars (a and b, say) and a list of pairs of strings as input, and returns a modified list of pairs of strings, for each string pair in the list. Here is example input/output:

```scheme
> (change-string-pairs "a" "b" '(("one" "two") ("three" "four") ("five" "six")))
(("aone" "btwo") ("athree" "bfour") ("afive" "bsix"))
```

3. get-best-pairs

This function takes both a scoring function (which in this case will be alignment-score-tail, described above) and a list of pairs of strings as input, and returns a modified list of pairs of strings. The returned list will contain all the optimal string pairs from the input, scored according to the input function.
Here is example input/output:

```scheme
> (get-best-pairs alignment-score-tail '(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow") ("hello" "blow") ("h_e_llo" "bl_o__w")))
(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow"))
```

Given all the functions described above, the function that I'm trying to write using them should have the following input/output:

```scheme
> (get-all-best-pairs "hello" "blow")
(("hello" "b_low") ("hello_" "b_l_ow") ("hello_" "b__low") ("hello" "_blow") ("hello_" "_bl_ow") ("hello_" "_b_low") ("hello_" "__blow"))
```

It would be really great if I could see how this can be done. In the functions that I have written above, I have made use of some built-in Scheme functions like map, filter, append and apply. I am also aware that the algorithm is extremely inefficient, of exponential complexity. That is not a concern to me at this time.

### High Scalability

#### Stuff The Internet Says On Scalability For April 18th, 2014

Hey, it's HighScalability time:

Scaling to the top of "Bun Mountain" in Hong Kong

- 44 trillion gigabytes: size of the digital universe by 2020; 6 times: we have six times more "stuff" than the generation before us.
- Quotable Quotes:
  - Facebook: Our warehouse stores upwards of 300 PB of Hive data, with an incoming daily rate of about 600 TB.
  - @windley: The problem with the Internet of Things is right now it's more like the CompuServe of Things
  - Chip Overclock: If you want to eventually generate revenue, you must first optimize for developer productivity; everything else is negotiable.
  - @igrigorik: if you are gzipping your images.. you're doing it wrong: http://bit.ly/1pb8JhK - check your server configs! and your CDN... :)
  - Seth Lloyd: The arrow of time is an arrow of increasing correlations.
  - @kitmacgillivray: When will Google enable / require all android apps to have full deep search integration so all content is visible to the engine?
  - Neal Ford: Yesterday's best practices become tomorrow's anti-patterns.
  - Rüdiger Möller: just made a quick sum up of concurrency related issues we had in a 7-15 developer team soft realtime application (clustered in-memory data grid + GUI front end). 95% of threads created are not to improve throughput but to avoid blocking (e.g. IO, don't block the messaging receiver thread, avoid blocking the event thread in a GUI app, ..).
  - Ansible: When done correctly, automation tools are giving them time back -- and helping out of this problem of needing to wear many hats.
- Amazon and Google are in an epic battle to dominate the cloud—and Amazon may already have won: If Amazon's entire public cloud were a single computer, it would have five times more capacity than those of its next biggest 14 competitors—including Google—combined. Every day, one-third of people who use the internet visit a site or use a service running on Amazon's cloud.
- What books would you select to help sustain or rebuild civilization? Here's Neal Stephenson's list. He was too shy to do so, but I would certainly add his books to my list.
- 12-Step Program for Scaling Web Applications on PostgreSQL. Great write-up of lessons learned by wanelo.com, which they used to sustain tens of thousands of concurrent users at 3K req/sec. Highly detailed. 1) Add more cache, 2) Optimize SQL, 3) Upgrade hardware and RAM, 4) Scale reads by replication, 5) Use more appropriate tools, 6) Move write-heavy tables out, 7) Tune Postgres and your file system, 8) Buffer and serialize frequent updates, 9) Optimize DB schema, 10) Shard busy tables vertically, 11) Wrap busy tables with services, 12) Shard services backend horizontally.
- Is this a soap opera? It turns out Google and not Facebook is buying Titan Aerospace, makers of high-flying solar-powered drones. Google's fiber network would make a great backbone network for a drone and Loon powered wireless network, completely routing around the telcos.
- Building Carousel, Part I: How we made our networked mobile app feel fast and local. Dropbox changed to an eventually consistent / optimistic-replication syncing model to make their app "feel fast, responsive, and local, even though the data on which users operate is ultimately backed by the Dropbox servers." Lesson: don't block the user! Instead of requiring changes to be propagated to the server synchronously, local and remote photos are treated as equivalent objects. Actions take effect immediately locally and then work their way out globally. Changes are used to stay consistent. A fast hash is used to tell which photos have not been backed up to Dropbox. Remote operations happen asynchronously.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

### /r/netsec

#### Take the Hacker Psychology Survey - Do it for science!

### StackOverflow

#### SBT IO.download(...) method show progress

In SBT (I am using 0.13.0) there is the sbt.IO object that provides some helpful methods. One can, for example, download files from the internet like this:

```scala
sbt.IO.download(new URL(...), file(...)) // my program freezes until the end of this method
```

I am writing an sbt plugin and want to download some files from the internet. I want to somehow show a progress bar during the download; that would be nice, informing the user that the program still works by showing some info. How would you do this?

### CompsciOverflow

#### Prove that the set cover (U,F) is NP-hard? [on hold]

Prove that the set cover problem (U,F) is NP-hard, even if for each element from U we have at most two subsets from the family F that cover this element.

### Undeadly

#### One week of OpenSSL cleanup

After the news of heartbleed broke early last week, the OpenBSD team dove in and started axing it up into shape.
Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit-count basis, both having around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@), who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@), who've both been doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls. Then Jonathan Gray (jsg@) and Reyk Flöter (reyk@) come next, followed by a group of late starters. Also, an honorable mention for Christian Weisgerber (naddy@), who has been fixing issues in ports related to this work. All combined, there have been over 250 commits cleaning up OpenSSL. In one week. Some of these are simple or small changes, while other commits carry more weight. Of course, occasionally mistakes get made, but these are also quickly fixed again; the general direction is clear: move the tree forward towards a better, more readable, less buggy crypto library.

### StackOverflow

#### scalac for Call-by-Name use references

I have some function:

```scala
def f(x: Int) = x * x
```

and then I call it:

```scala
var y = 0
f { y += 1; y }
```

The bytecode generated for the above code looks like:

```
 0: iconst_0
 1: istore_1
 2: aload_0
 3: iload_1
 4: iconst_1
 5: iadd
 6: istore_1
 7: iload_1
 8: invokevirtual #18    // Method f:(I)I
11: pop
12: return
```

If I change the function def f(x: Int) to use Call-by-Name:

```scala
def f(x: => Int) = x * x
```

the generated bytecode for the same piece of code looks like:

```
 0: new           #24    // class scala/runtime/IntRef
 3: dup
 4: iconst_0
 5: invokespecial #28    // Method scala/runtime/IntRef."<init>":(I)V
 8: astore_1
 9: aload_0
    ....
```

My question is: is it a rule that for Call-by-Name we operate on references, or does it depend on the semantic analysis phase of compilation?
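In terms of what that IntRef is for: a by-name argument is passed as a thunk that is re-evaluated at every use, and any local var the argument block mutates has to live in a shared heap cell so the thunk and the caller see the same variable. As a rough cross-language sketch of that desugaring (Python standing in for what scalac emits, not a literal translation):

```python
def f(thunk):
    # by-name parameter: evaluated once per *use*, not once per call
    return thunk() * thunk()

class IntRef:
    # counterpart of scala.runtime.IntRef: a shared, mutable cell
    def __init__(self, v):
        self.elem = v

y = IntRef(0)

def arg():
    # the block { y += 1; y } closes over the cell, not over a copy
    y.elem += 1
    return y.elem

print(f(arg))   # 1 * 2 = 2: the side effect runs twice
print(y.elem)   # 2
```

This is why the var moves to the heap: a plain stack slot could not be shared between the caller's frame and the thunk.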
#### UnsatisfiedLinkError with native library under sbt

I'm using sbt 0.13 and have issues using the leveldbjni native library under sbt (even after issue #358 was resolved). A similar issue has already been reported, for which sbt 0.13 should provide a solution, but it seems it doesn't, so I'm sharing my observations here. I'm getting an UnsatisfiedLinkError with the following example application.

- build.sbt

```scala
name := "example"

version := "0.1"

scalaVersion := "2.10.2"

libraryDependencies += "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.7"
```

- build.properties

```
sbt.version=0.13.0
```

- Example.scala

```scala
import org.fusesource.leveldbjni.internal._

object Example extends App {
  NativeDB.LIBRARY.load() // loading succeeds
  new NativeOptions()     // UnsatisfiedLinkError under sbt
}
```

I'm using Oracle JDK 1.7 and OS X 10.8.5. Running the example with run-main Example under sbt gives

```
[error] (run-main) java.lang.UnsatisfiedLinkError: org.fusesource.leveldbjni.internal.NativeOptions.init()V
```

whereas running it with

```
java -cp scala-library.jar:example_2.10-0.1.jar:leveldbjni-all-1.7.jar Example
```

just works fine. The application even runs successfully when Scala is on the bootclasspath:

```
java -Xbootclasspath/a:scala-library.jar -cp example_2.10-0.1.jar:leveldbjni-all-1.7.jar Example
```

Any ideas why there's an UnsatisfiedLinkError only under sbt?

### Planet Theory

#### Announcements

Time for a short rundown of announcements.

- STOC will be held May 31-June 3 in New York City. Early registration and hotel deadline is April 30. Student travel support requests are due by this Monday.
- The newly renamed ACM Conference on Economics and Computation (EC '14) will be held in Palo Alto June 8-12. Early registration deadline is May 15. Hotel deadline is May 19, but the organizers suggest booking early because Stanford graduation is June 13.
- The Conference on Computational Complexity will be held June 11-13 in Vancouver. Local arrangements information will be posted when available.
- The ACM Transactions on Algorithms is searching for a new Editor-in-Chief. Nominations due May 16.
- Several of the ACM Awards have been announced. Robert Blumofe and Charles Leiserson will receive the Paris Kanellakis Theory and Practice Award for their "contributions to robust parallel and distributed computing."
- Belated congratulations to new Sloan Fellows Nina Balcan, Nayantara Bhatnagar, Sharon Goldberg, Sasha Sherstov, David Steurer and Paul Valiant.

### TheoryOverflow

#### Complete combinator basis for System F-omega

The S and K combinators form a complete (and Turing-complete) basis when untyped. Within the Hindley-Milner type system, and I believe within System $F$ as well, S and K can encode any well-typed function, and with the addition of the Y combinator you gain Turing completeness (adding other recursion combinators yields different recursive classes). I highly suspect that the same can be done for System $F_{\omega}$, but I'm not sure quite how to do this. Is there a set of combinators that forms a complete basis typed under System $F_\omega$? Additionally, in System $F$, type checking is only decidable with type hints (Church style). Would a combinator basis for System $F$ still have decidable type checking? If true, would this also hold for a complete basis for System $F_\omega$? What about a complete basis and the decidability of type checking for Per Martin-Löf's type systems?

### StackOverflow

#### java.lang.NoClassDefFoundError when running Scala JUnit Test on Scala IDE (Eclipse Kepler)

I've recently decided to install Scala IDE 3.0.3 (which is basically Eclipse Kepler with the Scala plugin). I have the newest specs (specs2_2.10-23.11), scalaz (2.10-7.0.4) and collection (scalaj-collection_2.10-1.5).
I tried to run my tests in Scala using "Scala JUnit Test", but I got this error:

```
java.lang.NoClassDefFoundError: scalaz/concurrent/Strategy$
    at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1$$anonfun$2.apply(ExecutionStrategy.scala:43)
    at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1$$anonfun$2.apply(ExecutionStrategy.scala:41)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1.apply(ExecutionStrategy.scala:41)
    at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1.apply(ExecutionStrategy.scala:38)
    at scalaz.syntax.IdOps$class.$bar$greater(IdOps.scala:15)
    at scalaz.syntax.ToIdOps$$anon$1.$bar$greater(IdOps.scala:78)
    at org.specs2.reporter.JUnitReporter$class.report(JUnitReporter.scala:44)
    at org.specs2.runner.JUnitRunner$$anon$4.report(JUnitRunner.scala:43)
    at org.specs2.runner.JUnitRunner.run(JUnitRunner.scala:50)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: java.lang.ClassNotFoundException: scalaz.concurrent.Strategy$
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 17 more
```

What is causing that? I'm probably missing something, but I can't find what.

My tests are running just fine with gradle.

#### Re-def vars in Clojure(script)

I'm trying to find the idiomatic way to defer the initialization of a var (which I really intend to be immutable).

```clojure
(def foo nil)
...
(defn init []
  ; (def foo (some-function))
  ; (set! foo (some-function)))
```


I know Rich Hickey said re-defing isn't idiomatic. Is set! appropriate here?
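For what it's worth, the usual Clojure answer to deferred, once-only initialization is a `delay` (the var is bound immediately to the delay; the value is computed on first deref and cached), which avoids both re-`def` and `set!`. As a rough sketch of those semantics in Python (class and method names invented):

```python
class Delay:
    """Once-only deferred initialization: the wrapped function runs
    at most once, on first deref(), and the result is cached."""
    def __init__(self, fn):
        self._fn = fn
        self._evaluated = False
        self._value = None

    def deref(self):
        if not self._evaluated:
            self._value = self._fn()
            self._evaluated = True
        return self._value

calls = []
foo = Delay(lambda: calls.append("init") or 42)

print(foo.deref())  # 42 (runs the init function)
print(foo.deref())  # 42 (cached; init does not run again)
print(len(calls))   # 1
```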

### StackOverflow

#### log4j properties files based on leiningen test metadata?

How can I use different log4j properties files based on leiningen test metadata? I have functions that write debug logging output to a file. Often, there is a lot of data being written to this debug log file, slowing down the function. Normal runs of the application will not have debug file writing, so I want to benchmark the normal running of the function without that file writing. For benchmarking, I am using criterium. Let's assume that the metadata for benchmarking deftest definitions is :benchmark.

### /r/emacs

#### Default kill-buffer behavior

The default kill-buffer behavior (C-x k RET) opens the last buffer visited that isn't currently already open somewhere else.

Can I change this behavior so that it will reopen the last buffer in that window, regardless of whether that buffer is already open in some other frame?

Also, how could I figure this kind of thing out on my own? I looked here, here and here but couldn't find any information about this.

Thanks, -E

submitted by 23421543

### /r/netsec

#### GrrCon - Oct 16-17 2014 - Grand Rapids, MI

Hey all,

Just wanted to make all you midwestern Redditors aware of GrrCon, happening in October. If you are a student and you register for tickets with your .edu address, you can get 100 bucks off the ticket price (regular is $150, student is$50). It's an awesome conference filled with a great, international speaker lineup. If you have any other questions about the student pricing go to http://grrcon.com/attendance13 or toss me a DM.

There are two awesome keynotes this year by Dave Kennedy and Jayson Street.

Ticket price includes food, beer, and access to the after party featuring Henry Rollins.

From the about page: GrrCON is an information security and hacking conference put together to provide the community with a venue to come together and share ideas, information, solutions, forge relationships, and most importantly engage with like minded people in a fun atmosphere without all the elitist “Diva” nonsense. We bring together the CISO, the hacker, the security practitioner, and the researcher in a one-of-a-kind experience you CANNOT get elsewhere.

We provide three+ presentation tracks, in-con workshops, pre-con training, and a solutions arena to ensure you get the most out of the event. Come join the conversation.

GrrCon: No egos, No Divas, just a good time and good content.

EDIT:

I was just made aware of /r/grrcon that was created by /u/quizbuk to track the CTF and other things relating to the conference.

submitted by b31tf4ce

### CompsciOverflow

#### Linked lists and patterns python [migrated]

Trying to write a function that will iterate over the linked list, sum up all of the odd numbers and then display the sum. Here is what I have so far:

from List import ArrayToList, head, tail  # helper functions live in List.py (see the traceback)

def main():
    array = eval(input("Give me an array of numbers: "))
    ArrayToList(array)
    print(array[0])
    print(array[1])
    print(array[2])
    print(array[3])
    print(sumOdds(array))

def isOdd(x):
    return x % 2 != 0

def sumOdds(array):
    if array == None:
        return 0
    elif isOdd(head(array)):
        return head(array) + sumOdds(tail(array))
    else:
        return sumOdds(tail(array))

main()


I can't get it to actually print the sum though. Can anybody help me out with that?

Here is the output of the program when I run it:

$ python3 1.py
Give me an array of numbers: [11, 5, 3, 51]
Traceback (most recent call last):
  File "1.py", line 22, in <module>
    main()
  File "1.py", line 10, in main
    print(sumOdds(array))
  File "1.py", line 19, in sumOdds
    return head(array) + sumOdds(tail(array))
  File "1.py", line 18, in sumOdds
    elif (isOdd(head(array))):
  File "/Users/~/cs150/practice3/friday/List.py", line 34, in head
    return NodeValue(items)
  File "/Users/~/cs150/practice3/friday/List.py", line 12, in NodeValue
    def NodeValue(n): return n[0]
TypeError: 'int' object is not subscriptable

### StackOverflow

#### In Scala, how to test the type of an 'Any' object against a type with type parameter?

I am trying to get a type-safe way of converting the result of parsing a JSON string. I want to check whether a field is a Map[String, Any] or a plain string. My first attempt is

def test(x: Any) = {
  x match {
    case m: Map[String, Any] => ...
    ...
  }
}

This causes "non-variable type argument String in type pattern Map[String,Any] is unchecked since it is eliminated by erasure".

Looking through the documentation of TypeTag and ClassTag, I could not find a good way to accomplish that. The following code does not cause the warning, but I wonder why it works.

type StringMap = Map[String, Any]

def test(x: Any) = {
  x match {
    case m: StringMap => ...
    ...
  }
}

### CompsciOverflow

#### What can Idris not do by giving up Turing completeness?

I know that Idris has dependent types but isn't Turing complete. What can it not do by giving up Turing completeness, and is this related to having dependent types? I guess this is quite a specific question, but I don't know a huge amount about dependent types and related type systems.

#### Convex hull algorithm - why it can't be computed using only comparisons

Say I want to compute a convex hull of given points on the plane. I would like to write an algorithm that only compares the points and doesn't do any arithmetic operations.
Wikipedia states that:

The standard $\Omega(n \log n)$ lower bound for sorting is proven in the decision tree model of computing, in which only numerical comparisons but not arithmetic operations can be performed; however, in this model, convex hulls cannot be computed at all.

Why is it so? I can't find any justification for it anywhere. I know it to be intuitively true, but how come it's a necessity?

### TheoryOverflow

#### Notion of 'order type' for non-well ordered sets?

Given a well-ordered poset $P = (S,\leq)$, it is possible to define its order type as the supremum of the order types of its linear extensions; this is well-defined as shown in 'Well-partial orderings and hierarchies' by de Jongh & Parikh.

Suppose now that we have a poset $P$ that is not well-ordered. In general it is possible to embed 'arbitrarily complex' well-orders in $P$, e.g. if $P$ is $(\mathbb{R},\leq)$ then all countable ordinals can be embedded in $P$. Is it still possible to assign an order type $\lambda$ to $P$ such that every 'reasonable embedding' of a well-order into $P$ has order type $\leq \lambda$?

The most basic situation in which I'm interested is the poset of finite permutations ordered by pattern involvement. It seems intuitively that the well-ordered posets obtainable in this way cannot be too complex - for instance, we can embed the plane trees ordered by topological minors, which have order type $\epsilon_0$ if I'm correct.

### Dave Winer

#### WordPress-to-OPML source

As promised, I have released the source for the server that converts a WordPress blog into a single Fargo-editable outline. It's written in JavaScript and runs in node.js. The format is OPML, which has many other uses. It's provided under the MIT license.

https://github.com/scripting/wp2opml

BTW, you'll find a link to this server and all my other source releases in the GitHub menu at the top of every post on this blog.
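
For readers who haven't seen OPML before, a minimal outline document looks roughly like this (a hand-written illustration of the format, not the output of wp2opml):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>My Blog</title>
  </head>
  <body>
    <!-- outlines nest arbitrarily; text is the only required attribute -->
    <outline text="2014">
      <outline text="April">
        <outline text="First post"/>
      </outline>
    </outline>
  </body>
</opml>
```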
### StackOverflow

#### Generic case class to JSON using Writes

I am trying to convert this model to JSON, but I always get the error "No apply function found matching unapply parameters". I tried to implement two different Writes for this, but neither works. Here is my model:

case class Page[T](
  var data: List[T],
  var previous: String,
  var next: String,
  var totalPageCount: Int)(implicit val tWrites: Writes[T])

object PageScala {
  // Both Writes generate a "No apply function found matching unapply parameters" error
  implicit val pageWrites = Json.writes[Page[_]]

  implicit def pageWriter[T]: Writes[PageScala[T]] = new Writes[PageScala[T]] {
    def writes(o: PageScala[T]): JsValue = {
      implicit val tWrites = o.tWrites
      val writes = Json.writes[PageScala[T]]
      writes.writes(o)
    }
  }
}

Does anyone have a solution?

### CompsciOverflow

#### Question about Kannan's theorem [migrated]

I was reading a paper of Buhrman and Homer, "Superpolynomial Circuits, Almost Sparse Oracles and the Exponential Hierarchy" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.2228&rep=rep1&type=pdf). On the bottom of page 2 they remark that the results of Kannan imply that $NEXPTIME^{NP}$ does not have polynomial size circuits.

I know that in the exponential time hierarchy, $NEXPTIME^{NP}$ is just $\Sigma_2EXP$, and I also know that Kannan's result is that $\forall c\ \exists L \in \Sigma_2 P$ such that $L \not\in Size(n^c)$. Of course, Kannan's theorem is NOT saying $\Sigma_2 P \not\subset P/poly$ (in order for that to be the case we would need to show that $\exists L \in \Sigma_2 P$ such that $\forall c$, $L \not\in Size(n^c)$). However, I don't see how Kannan's result implies that $NEXPTIME^{NP} \not\subset P/poly$.

### /r/compsci

#### How many CS professionals would benefit from mentoring/course/program about solving the most challenging business problems? (Validating this, I need your answers!)
I started out really technical in CS. I have seen my value shift drastically from technical expertise towards helping people in organizations solve their challenging business problems using custom developed (or enhanced) software systems. Specifically, I mean business process problems such as "developing customer relationships," "reporting sales," and "managing customer orders."

I wonder if there are others out there who might want to apply their technical skills more closely to people's underlying problems, but just feel stuck and don't know how or where to get started. If a mentoring/course/program opportunity (format TBD) existed to help CS professionals better relate their work to the people who will use the system, would you be interested? (Yes = Up Vote, No = Down Vote.)

I botched my earlier explanation, so I'm providing some additional detail on potential subjects that could be included, based on this feedback. If nothing comes of this, then at least this list of subjects could provide a basis for people to do their own exploration.
Stakeholders - know the people and organizations who will use and interact with the system you are developing, and how they will ultimately judge your deliverables

• explore the people who will interact with the system, and the environments in which they operate
• identify and understand stakeholder incentives, and how they impact the system behavior and performance
• visualize stakeholder interactions, including unanticipated conflicts that may impact your work
• interact with business process experts, and understand how the developed system will actually be used
• gauge which business benefits are important to your stakeholders, and identify the value that they will place on them

Technical features - the details of your work, and how they impact your stakeholders

• select features and specifications that actually deliver value based on stakeholder benefits
• simultaneously compare advantages and disadvantages of multiple features, including unexpected interactions
• present combinations of features to your stakeholders, and clearly communicate the impact to them
• discuss technical details with your stakeholders in ways that clearly communicate the business impact

Concepts and testing - the results of your work, and how it will be received by your stakeholders

• facilitate the selection of different system concepts and prototypes based on stakeholder impact
• anticipate business and technical risks before they become deliverable crises
• design "real options" to accommodate unavoidable future uncertainties that might impact your client
• design testing scenarios and plans based on your stakeholders' realistic business process and environment
• negotiate testing criteria that provide measurable goals for predictable completion of your work

Others pointed out that academic options exist to serve this purpose, which is an excellent point.
I have engineering and business degrees from Cornell and MIT, so I'm fairly aware of the established material and overlaps that exist. Also, I know this might not be for everyone. That is ok. If possible, please let me know why you would or would not want it, or any other feedback. Thanks!

submitted by surfman49
[link] [24 comments]

### TheoryOverflow

#### Help in NP-Hardness proof of a certain type of Class Cover problem

The Class Cover problem is nothing but finding an optimal cover of a certain class (point set) with a particular shape only, i.e. finding the minimum number of polygons of a certain shape (for example, rectangles) required to cover the point set (S). Covering by rectangles is proved to be NP-Hard, and now I want to consider a different shape, say an L-shape, and find a covering by it. Can somebody help me prove that this problem is NP-Hard too? I've worked on it a bit and tried reducing from a known NP-Hard problem, but couldn't quite get it.

### QuantOverflow

#### If technical analysis rules for predicting stock prices are universal, why should we learn neural networks? [on hold]

Is there any out-of-the-box neural network tool that has already learned all the technical rules by being fed lots of stock trading data?

### /r/compsci

#### Does a location aware Bloom Filter exist?

The standard Bloom filter does not admit false negatives, but allows false positives. These false positives are meant to be evenly distributed (I think; at least they don't appear to usefully cluster in my trials).

What if I had a binary image (or anything else spatial)? Then I care if it mispredicts a 1 in an area filled with 0s, but I don't really care if it mispredicts a 1 in an area filled with 1s. Does such a spatially aware sketch data structure exist?
submitted by keije
[link] [7 comments]

### /r/netsec

#### iOS Malware Campaign "Unflod Baby Panda"

### DataTau

#### Mining Social Web APIs with IPython Notebook

### StackOverflow

#### Scala deep type checking

We have a function that can return anything:

def func: AnyRef

And we need to check whether the return value is a

Tuple2[String, String]

or

List[Tuple2[String, List[String]]]

or

List[Tuple2[String, List[Int]]]

or anything else. What is the right way to do that?

### Fred Wilson

#### Feature Friday: Comedy On SoundCloud

Where is the next Howard Stern going to emerge? I don't think it will be terrestrial or satellite radio. I think it's more likely that he or she will emerge from a place like our portfolio company SoundCloud.

There is a ton of comedy on SoundCloud, and it's growing very fast. But discovery has been a problem. In the most recent Android release, SoundCloud has introduced some very nice discovery features. These features also exist on the web and will be coming to iOS soon.

Since the way we most likely want to listen to the next Howard Stern is by bluetoothing our phone to our car when we are driving to and from work, I will show you how to listen to comedy on SoundCloud using the Android app flow. It is very similar on the web.

First, you open up the app menu by tapping on the upper left of the app and get this:

Next you click on Explore to get this:

Then you select Comedy to get this:

Each of these "cards" represents a potential new Howard Stern show. You select one and start listening. If you find one you really like, you can follow it on SoundCloud and get the next show right in your feed.

If you are driving to and from work and are looking for something good to listen to, I'd strongly recommend checking out some of these comedy shows on SoundCloud. They are great.

### StackOverflow

#### Are purely functional data structures always lock-free?
I've seen this claimed in several places, including on SO: http://stackoverflow.com/a/20467457/243238, http://stackoverflow.com/a/4400389/243238.

I get the point that locks are not needed to modify the data, but you end up with multiple versions of it after concurrent modifications. That doesn't seem very useful in practice. I've tried to describe this with a simple scenario below.

Let's say we have 2 threads A and B. They are both modifying a purely functional dictionary D. They don't need locks at this point because the data is immutable, so they output new dictionaries DA and DB. Now my question is: how do you reconcile DA and DB so that later operations can see a single view of the data?

EDIT: The answers are suggesting to use a merge function over DA and DB. I don't see how that solves the problem, since there could be another thread C which runs concurrently with the merge operation. The claim is that purely functional data structures are lock-free, but using a merge function sounds more like eventual consistency, which is a different thing.

### Planet Theory

#### Lecture 12 -- Privacy Yields an Anti-Folk Theorem in Repeated Games

Last week, Kobbi Nissim gave us an excellent guest lecture on differential privacy and machine learning. The semester has gone by fast -- this week is our last lecture in the privacy and mechanism design class. (But stop by next week to hear the students present their research projects!)

Today we'll talk about infinitely repeated games. In an infinitely repeated game, n players repeatedly, over an infinite number of stages, play actions and obtain payoffs based on some commonly known stage game. Since the game is infinitely repeated, in order to make sense of players' total payoff, we employ a discount factor delta that specifies how much less valuable a dollar is tomorrow compared to a dollar today (delta is some number in [0, 1)).
In games of perfect monitoring, players perfectly observe what actions each of their opponents have played in past rounds, but in large n player games, it is much more natural to think about games of imperfect monitoring, in which agents see only some noisy signal of what their opponents have played. For example, one natural signal players might observe in an anonymous game is a noisy histogram estimating what fraction of the population has played each type of action. (This is the kind of signal you might get if you see a random subsample of what people play -- for example, you have an estimate of how many people drove on each road on the way to work today by looking at traffic reports.) Alternately, there may be some low dimensional signal (like the market price of some good) that everyone observes, computed as a randomized function of everyone's actions today (e.g. how much of the good each person produced).

A common theme in repeated games of all sorts are folk theorems. Informally, these theorems state that in repeated games, we should expect a huge multiplicity of equilibria, well beyond the equilibria we would see in the corresponding one-shot stage game. This is because players observe each other's past behavior, and so can threaten each other to behave in prescribed ways or else face punishment. Whether a folk theorem is a positive result or a negative result depends on whether you want to design behavior or predict behavior. If you are a mechanism designer, a folk theorem might be good news -- you can try and encourage equilibrium behavior that has higher welfare than any equilibrium of the stage game. However, if you want to predict behavior, it is bad news -- there are now generically a huge multiplicity of very different equilibria, and some of them have much worse welfare than any equilibrium of the stage game.

In this lecture (following a paper joint with Mallesh Pai and Jon Ullman) we argue that:

1. In large games, many natural signaling structures produce signal distributions that are differentially private in the actions of the players, where the privacy parameter tends to 0 as the size of the game gets large, and
2. In any such game, for any discount factor delta, as the size of the game gets large, the set of equilibria of the repeated game collapses to the set of equilibria of the stage game. In other words, there are no "folk theorem equilibria" -- only the equilibria that already existed in the one shot game.

This could be interpreted in a couple of ways. On the one hand, it means that in large games, it might be harder to sustain cooperation (a negative result). On the other hand, since it shrinks the set of equilibria, it means that adding noise to the signaling structure in a large game generically improves the price of anarchy over equilibria of the repeated game (a positive result).

### CompsciOverflow

#### Control flow graphs - Tree decomposition

Considering the above terminologies, drawing control flow graphs for any program is very simple. For example:

While A
  if B
    do ..
  else
    do ..
end while

For the above example, when doing the decomposition, I can say it is D2(D1), i.e. a while and then, inside the while, an if-then-else.

In the same situation, how can you represent CONTINUE and BREAK statements? Even for the FOR statement there is no defined terminology like there is for while and if-then-else; the FOR statement falls under while. My prof says that in theory there is nothing about break and continue statements, and I couldn't find anything online either. For example:

#include <stdio.h>

int main() {
    float num, average, sum = 0;
    int i, n;
    printf("Maximum no. of inputs\n");
    scanf("%d", &n);
    for (i = 1; i <= n; ++i) {
        printf("Enter n%d: ", i);
        scanf("%f", &num);
        if (num < 0.0)
            break; // for loop breaks if num < 0.0
        sum = sum + num;
    }
    average = sum / (i - 1);
    printf("Average=%.2f", average);
    return 0;
}

The control flow graph for this is as below; the nodes have the line numbers written on them. (Sorry, the image is sideways.)

I decomposed this as:

P1;P1;D2(P1;P1;D1);P1

P1 represents a set of statements outside loops.

I'm not sure if this is correct. My professor says to use something like D22 for break, i.e. to create a new term beyond those in the first image above. My main question here is the decomposition. I know that I drew the CFG correctly, but is the decomposition correct according to the first image? The break kind of creates a while, as you can see in the CFG, but I'm not sure if it has to be considered as a while, and I guess we cannot, as per my professor. I am working on this and wanted to know if anyone has come across something for break and continue statements in the decomposition of graphs. Please let me know. Thanks.

PS: Please let me know if I am unclear or if any more info is needed. I can probably write down an example and upload the picture.

### StackOverflow

#### How to access and send message to ZeroMQ from Tornado handler?

How do I access and send a message to ZeroMQ from a Tornado handler? I have tried:

context = zmq.Context(1)
try:
    # Socket facing clients
    frontend = context.socket(zmq.XREP)
    frontend.bind("tcp://*:5559")
    # Socket facing services
    backend = context.socket(zmq.XREQ)
    backend.bind("tcp://*:5560")
    zmq.device(zmq.QUEUE, frontend, backend)
except Exception, e:
    print e
    print "bringing down zmq device"
finally:
    frontend.close()
    backend.close()
    context.term()

as a standalone program which communicates with the others, but how do I put messages on the same queue from a handler? Do I need to create a context every time or not?

### TheoryOverflow

#### Faithful functors vs forgetful functors: exact category-theoretic defs?
In category theory, a functor between two categories $C,D$ is a map $F$ that assigns to each object (resp. morphism) $x$ of $C$ a corresponding object (resp. morphism) $F(x)$ of $D$, respecting the incidence relations. For each pair of objects $x,y$ of $C$, we may then define a map $F_{x,y}$ that takes any morphism $m : x \rightarrow y$ to a morphism $F(m) : F(x) \rightarrow F(y)$.

I understand that $F$ is called faithful if every such mapping $F_{x,y}$ is injective, which means intuitively that the relational structure of the category is preserved, although the objects may not be. There is a related notion of forgetful functor for which I couldn't find a precise definition, so is there anyone willing to help? I mean, is it just the opposite of faithful, or is it the combination of unfaithfulness with some other implicit property?

### Fefe

#### TV recommendation: Beckmann on the digital world. ...

TV recommendation: the Beckmann episode on the digital world. At the end of the day it is of course still a TV talk show, but by talk show standards it is quite worth watching. Surprisingly for me, none of the guests came across as uninformed or merely spouted platitudes.

Update: A few thoughts on the show. Gabriel showed more backbone on behalf of the SPD than I can remember ever having seen. Not only did he threaten Google with a break-up, he also made Eric Schmidt's offer of a private conversation public instead of simply accepting it, as is otherwise customary. I took that as a massive middle finger pointed at Google. My impression is that Gabriel now wants to build himself up as an internet-freedom-and-privacy politician, using Google as the foil.

My other thought is that the token internet entrepreneur came across as astonishingly eloquent and as a bright mind. That is a clear departure from the usual talk show fare, where you have a stammering "affected party" who only sits there so the audience can be shown someone affected.

I noticed that he pointed out that he strictly separates business and private matters. That is a broad hint whose significance apparently nobody in the round picked up on directly. It means that the man himself is unhappy that he has to collect data. He knows that the customers don't want it, that it is immoral, and that he is doing something reprehensible. He does it because he believes he otherwise wouldn't be competitive. The important lesson here: no real resistance to regulation attempts is to be expected from him and his colleagues. Only token resistance, to reassure the investors. Actually, none of them would have a problem with stopping the profiling if it were banned starting tomorrow. The only ones who would genuinely oppose it are companies that see themselves as supranational, like Google. Companies that are using profiling to push other, established companies out of the market. In my view, in a few years Google would also have no problem giving up profiling, once they have completely taken over the markets for advertising, tracking, and insurance -- the markets in which their profiling expertise gives them an advantage. But until then, they still need it.

Update: One more thought. Juli Zeh points out in the show that in Europe we will soon have the choice between Martin Schulz and Jean-Claude Juncker, and that everybody should please mark their ballot accordingly. I never thought I would say something positive about Social Democrats again, but Martin Schulz is so obviously the better choice that I would recommend applying a bit of pressure among your acquaintances as well. We are all called upon to marginalize the conservatives. With them, it is clear that we will get a new data retention law.

### CompsciOverflow

#### Functional dependencies with the same key?
Let's consider a table with

carID | hireDate | manufactory | model | custID | custName | outletNo | outletLoc

I want to evaluate all the functional dependencies to bring it into first, second and then third normal form.

• Functional dependencies

carID, hireDate -> custID

• Partial dependencies

carID -> manufactory, model, outletNo

• Transitive dependencies

custID -> custName
outletNo -> outletLoc

Since a car is in one outlet only, I have this in the partial dependencies:

carID -> manufactory, model, outletNo

However, this leads to anomalies in insertion (imagine adding a car with no outlet), so shouldn't that be like this?

carID -> manufactory, model
carID -> outletNo

But isn't this still a normalisation anomaly?

### StackOverflow

#### Scheme - Help Writing A Function

I'm trying to write a function in Scheme that takes two strings as input and returns a list of all optimal pairs of strings. In order to do this, I know that I need to make use of the following functions that I have already written. They will obviously need to be used as helper functions for the function that I'm trying to write.

1. alignment-score-tail

This function takes two strings, scores each character according to a scoring criterion, and accumulates the result. If two characters are equal, a score of 2 is obtained; if two characters are not equal, a score of -1 is obtained; and finally, if one character is an underscore and the other character something else, a score of -2 is obtained. Here is example input/output:

> (alignment-score-tail "x" "x")
2
> (alignment-score-tail "x" "y")
-1
> (alignment-score-tail "x" "_")
-2
> (alignment-score-tail "Hello" "__low")
-4

2. change-string-pairs

This function takes two chars (a and b, say) and a list of pairs of strings as input, and returns a modified list of pairs of strings: for each string pair in the list, it prepends a to the first string and b to the second.
Here is example input/output:

> (change-string-pairs "a" "b" '(("one" "two") ("three" "four") ("five" "six")))
(("aone" "btwo") ("athree" "bfour") ("afive" "bsix"))

3. get-best-pairs

This function takes both a scoring function (in this case alignment-score-tail, described above) and a list of pairs of strings as input, and then returns a modified list of pairs of strings. The returned list will contain all the optimal string pairs from the input, scored according to the input function. Here is example input/output:

> (get-best-pairs alignment-score-tail '(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow") ("hello" "blow") ("h_e_llo" "bl_o__w")))
(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow"))

Given all the functions described above, the function that I'm trying to write using them will have the following input/output:

> (get-all-best-pairs "hello" "blow")
(("hello" "b_low") ("hello_" "b_l_ow") ("hello_" "b__low") ("hello" "_blow") ("hello_" "_bl_ow") ("hello_" "_b_low") ("hello_" "__blow"))

It would be really great if I could see how this can be done. In the functions above, I have made use of some built-in Scheme functions like map, filter, append and apply. I am also aware that the algorithm is extremely inefficient, being of exponential complexity. That is not a concern to me at this time.

#### Scala deep type checking

My purpose is to do deep type checking, i.e. to check all type arguments. For example, here I am using TypeTag:

import scala.reflect.runtime.universe._

def check[T](a: T)(implicit tag: TypeTag[T]) =
  tag.tpe <:< typeTag[(String, List[Int])].tpe

And it seems to work fine:

scala> check(("", List("")))
res0: Boolean = false

scala> check(("", List(1)))
res1: Boolean = true

But I am not sure if it is the right way.
Also, I think the implementation of equals in TypeTag seems strange:

override def equals(x: Any) = x.isInstanceOf[TypeTag[_]] &&
  this.mirror == x.asInstanceOf[TypeTag[_]].mirror &&
  this.tpe == x.asInstanceOf[TypeTag[_]].tpe

this.tpe == x.asInstanceOf[TypeTag[_]].tpe always returns false if I am using it in my check function.

I am also interested in a common solution to do different type checks with one function, but I have no idea how to do that.

### Planet Clojure

#### Optimal left to right pattern matching Automata in literate clojure

Hi, I am in the process of rewriting expresso's rule engine. Currently, rules in expresso can be very expressive, but the rule engine was written in the first two weeks of my gsoc project time on top of core.logic, so there are many efficiency issues with it. I did some research on how the rule engines of other term rewriting systems are built (like maude, elan, stratego ...). Following a few references, I came to this paper, which presents an algorithm to match an expression against multiple simultaneous patterns without backtracking, in one pass over the expression, which is really cool and the perfect basis for a good rule engine/compiler.

I implemented the algorithm in the paper and also built a rule compiler that unrolls the constructed automaton into an efficient clojure function. It is a literate program; you can find it here. The rest of this post is the html export created from the literate org file.

## 1 Introduction

This is a clojure implementation of the algorithm from this paper. The problem it addresses is matching a term against multiple patterns simultaneously, without backtracking, scanning the expression only once from left to right. The patterns are assumed to be linear, that is, no two variables in a pattern share the same name. Testing for equality has to be done as an additional check when the linear pattern matches.
To accomplish this, a directed acyclic graph representing a deterministic automaton is created from the patterns, with transitions labeled by the next symbol read during a left to right scan through the pattern, and with the currently matching patterns as nodes. The dag is kept minimal, that is, there are no two states in the dag that produce the same matching sub-tree.

I extended the algorithm in the paper to also work when there is a wildcard on a function symbol, as in the pattern '(? a b), and also to handle functions with multiple arities. This adds a few extra cases to the interpreter and the compiler, but when it isn't needed it doesn't slow down the matching process.

Interpreting it works as expected - scan through the input expression, and for each symbol:

- follow the labeled transition if it exists
- pick the default route if one exists, in case that fails
- fail otherwise
- repeat until at a failure state or the end of the expression is reached

The dag can also be compiled to an optimized clojure function resembling the decision tree that the dag represents. Basically, the function consists of a bunch of

(case <location-in-expression>
  <symbol1> <forward-location-and-check-next-input>
  ....
  <default> <go-through-default-route-if-possible>)

thus eliminating the need to search through the dag at matching time.

### 1.1 Implementation

(ns optimal-left-to-right-pattern-matching-automata.core
  (:require [clojure.set :as set]
            [clojure.walk :as walk]
            [clojure.zip :as zip]))

We need a (meta-) symbol for a default transition. It will be called omega from now on:

(def omega '?)

#### 1.1.1 Representing patterns

Because we are concerned with scanning expressions from left to right, the matching positions of the patterns can be totally ordered - by how far right they appear in the printed representation - and put in a single list.
Function symbols are represented as [<function-symbol> <number-of-arguments>], so that the flat representation retains all information about the original structure of the pattern. For example, the pattern '(f (g a b) a a b) can be represented as '([f 4] [g 2] a b a a b). In this representation, a pattern is just the list of transition labels that the automaton must perform in order to match an expression against the pattern. During the matching, there will always be a current state, which is the set of all patterns with the same matching prefix, a current position where the next symbol will be read, and a suffix to be matched against the next symbols read. This is the definition of a matching-item.

```clojure
(defn matching-item
  "A matching item is a triple r:a*b where ab is a term and r is a rule label.
   The label identifies the origin of the term ab and hence, in a term rewriting
   system, the rewrite rule which has to be applied when ab is matched.
   * is called the matching dot, a the prefix and b the suffix. The first symbol
   of b is the matching symbol. The position of the matching dot is the
   matching position."
  [r a b]
  [r a b])

(defn matching-symbol [matching-item]
  (let [[r a b] matching-item]
    (first b)))

(def infinity (Double/MAX_VALUE))

(defn final? [matching-item]
  (let [[r a b] matching-item]
    (empty? b)))

(defn matching-position [matching-item]
  (if (final? matching-item)
    infinity
    (let [[r a b] matching-item]
      (inc (count a)))))

(defn initial-matching-item [label pattern]
  [label '() pattern])
```

The current state of the automaton is then a set of matching items which share the same prefix.

```clojure
(defn matching-set? [matching-items]
  (let [[r a b] (first matching-items)]
    (every? #(let [[r2 a2 b2] %] (= r r2))
            (rest matching-items))))

(defn initial-matching-set? [matching-items]
  (and (matching-set? matching-items)
       (= '() (second (first matching-items)))))

(defn final-matching-set? [matching-items]
  (and (matching-set? matching-items)
       (= '() (nth (first matching-items) 2))))
```

#### 1.1.2 The Transition function for the automaton

Now that we know what the states and the transitions of the automaton will be, we can look at the definition of the transition function delta. For more explanation, see the paper itself. Basically, from the current state - the current matching set - it returns as the next node the set of matching items which can be forwarded by the symbol s - that is what the accept function does. It also avoids backtracking by adding more states when there is an ambiguity of the form that one pattern has a default next transition while another has a transition that goes a level deeper with a function symbol. If the function-symbol transition were followed, it could fail, and one would have to backtrack and go through the omega transition. Therefore, for each such situation a new pattern is added to the matching set, which consists of the omega rule but with the omega replaced by the next function symbol and a number of omegas matching the function's arity. It is also important to apply this closing to the matching set at the very beginning, to handle the case of a default omega pattern. The paper fails to mention that.

```clojure
(defn forward-matching-position [matching-item]
  (let [[r a b] matching-item]
    [r (concat a [(first b)]) (rest b)]))

(defn functions [matching-set]
  (into #{} (filter #(or (and (symbol? %) (not= omega %))
                         (and (sequential? %)
                              (symbol? (first %))
                              (number? (second %))))
                    (map matching-symbol matching-set))))

(defn arity [function-symbol]
  (or (and (sequential? function-symbol) (second function-symbol)) 0))

(defn accept [matching-items s]
  (map forward-matching-position
       (filter #(= (matching-symbol %) s) matching-items)))

(defn close [matching-items]
  (let [F (functions matching-items)]
    (set/union matching-items
               (for [matching-item matching-items
                     function-symbol F
                     :let [arityf (arity function-symbol)]
                     :when (= omega (matching-symbol matching-item))]
                 (let [[r a b] matching-item]
                   [r a (concat [function-symbol] (repeat arityf omega) (rest b))])))))

(defn delta [matching-items s]
  (close (accept matching-items s)))
```

#### 1.1.3 Creating the DAG

1. Graph implementation

Here is a very simple implementation of a functional graph data structure.

```clojure
;;quick and dirty functional graph implementation
(def empty-graph {})

(defn add-node [g n]
  (if (g n) g (assoc g n {:next #{} :prev #{}})))

(defn add-edge [g n1 n2 l]
  (-> g
      (add-node n1)
      (add-node n2)
      (update-in [n1 :next] conj [n2 l])
      (update-in [n2 :prev] conj [n1 l])))

(defn remove-edge [g n1 n2 l]
  (-> g
      (add-node n1)
      (add-node n2)
      (update-in [n1 :next] disj [n2 l])
      (update-in [n2 :prev] disj [n1 l])))

(defn remove-node [g n]
  (if-let [{:keys [next prev]} (g n)]
    ((comp #(dissoc % n)
           #(reduce (fn [g* [n* l*]] (remove-edge g* n* n l*)) % prev)
           #(reduce (fn [g* [n* l*]] (remove-edge g* n n* l*)) % next))
     g)
    g))
```

2. Recognizing equivalent states

To keep the created automaton minimal, equivalent states have to be recognized during the construction phase. Two states are equivalent if for each item in set1 there exists an equivalent item in set2. Two matching items are equivalent if they have the same rule label and the same suffix.

```clojure
(defn equivalent-matching-items? [matching-item1 matching-item2]
  (let [[r1 a1 b1] matching-item1
        [r2 a2 b2] matching-item2]
    (and (= r1 r2) (= b1 b2))))

(defn extract-first-by
  "returns [extracted rest-of-collection] or false"
  [f coll]
  (loop [[c & cs] coll rest-coll []]
    (if c
      (if (f c)
        [c (concat rest-coll cs)]
        (recur cs (conj rest-coll c)))
      false)))

(defn equivalent-matching-sets? [matching-set1 matching-set2]
  (loop [[mit & mits] matching-set1
         matching-set2 matching-set2]
    (if mit
      (if-let [[mit2 mits2] (extract-first-by
                             #(equivalent-matching-items? mit %) matching-set2)]
        (recur mits mits2)
        false)
      (empty? matching-set2))))
```

3. Constructing the DAG

For a detailed description of this algorithm, see the paper. Basically, we start with the initial matching set and create new states for all possible transitions, adding the nodes and the edges to the graph, or only the transition if an equivalent state already exists in the graph. The newly created states are sorted according to their matching position, so that states with only a few already-matched items are handled first. The creation ends when the list of states has been traversed completely.

```clojure
(defn failure? [state]
  (or (= '() state) (nil? state)))

(defn get-next-node [g n l]
  (some #(and (= (second %) l) (first %)) (get-in g [n :next])))

(defn search-equivalent-node [graph node]
  (first (for [[n v] graph
               :when (equivalent-matching-sets? node n)]
           n)))

(defn insert-according-to-matching-position [nodes-to-visit new-matching-set]
  ;;nodes-to-visit has to be sorted according to matching-position
  ;;all matching positions in a matching set are the same
  (let [nmp (matching-position (first new-matching-set))]
    (loop [[n & ns :as nodes-left] nodes-to-visit
           new-nodes-to-visit []]
      (if n
        (if (<= (matching-position (first n)) nmp)
          (recur ns (conj new-nodes-to-visit n))
          (vec (concat new-nodes-to-visit [new-matching-set] nodes-left)))
        (conj nodes-to-visit new-matching-set)))))

;;problem here? there used to be only one omega, now there are several
(defn create-new-states [pos nodes-to-visit graph]
  (let [current-state (nth nodes-to-visit pos)
        F (functions current-state)]
    (loop [[s & ss] (concat F [omega])
           nodes-to-visit nodes-to-visit
           graph graph]
      (if s
        ;;work to do
        (let [new-matching-set (delta current-state s)]
          ;;check if there is already an equivalent matching-set in the graph
          (if-let [eq-node (search-equivalent-node graph new-matching-set)]
            (recur ss nodes-to-visit
                   (add-edge graph current-state eq-node s))
            (recur ss
                   (insert-according-to-matching-position nodes-to-visit new-matching-set)
                   (add-edge graph current-state new-matching-set s))))
        ;;all symbols consumed, so return the new graph and nodes
        [graph nodes-to-visit]))))

(defn create-dag [initial-matching-set]
  (loop [graph empty-graph
         nodes-to-visit [initial-matching-set]
         pos 0]
    (if (= (count nodes-to-visit) pos)
      ;;all nodes visited, so return the graph
      (remove-node graph '())
      (let [[new-graph new-nodes-to-visit] (create-new-states pos nodes-to-visit graph)]
        (recur new-graph new-nodes-to-visit (inc pos))))))
```

#### 1.1.4 Interpreting the DAG

With the minimal dag constructed as described in the paper, we can now leave the paper behind and implement how to interpret that dag to match an expression against multiple patterns. To do this, we traverse the expression from left to right using Clojure zippers. We recursively check for the next transition, follow it and move the zipper forward accordingly, and fail if no transition is possible. If we go through a wildcard, we add the current value of the zipper location to the bindings.

```clojure
;;TODO may miss some bindings in rules created by close
(defn consume-next [g current-state symbol]
  (let [next-state (get-next-node g current-state symbol)]
    (if (failure? next-state)
      ;;there was no link, so go through the omega link
      [(get-next-node g current-state omega) [symbol]]
      [next-state []])))

(defn consume-next-level-down [g current-state [symbol count]]
  (let [next-state (get-next-node g current-state [symbol count])]
    (if (failure? next-state)
      ;;there was no link, so go through the omega link
      [(get-next-node g current-state [omega count]) [symbol]]
      [next-state []])))

(defn- next-without-down [loc]
  (if (= :end (loc 1))
    loc
    (or (zip/right loc)
        (loop [p loc]
          (if (zip/up p)
            (or (zip/right (zip/up p))
                (recur (zip/up p)))
            [(zip/node p) :end])))))

(defn match-expression [g patterns expression]
  (loop [loc (zip/seq-zip expression)
         node patterns
         bindings []]
    (if (or (failure? node) (zip/end? loc))
      ;;done
      [node bindings]
      (if (zip/branch? loc)
        ;;ok, try whether the head symbol matches
        ;;we are using preorder throughout the matching
        (let [children-count (dec (count (zip/children loc)))
              head-loc (zip/next loc)
              [next-node add-bindings] (consume-next-level-down
                                        g node [(first head-loc) children-count])]
          (if (failure? next-node)
            ;;head got no match, so we have to stay at the original level and
            ;;try to match there for a value or omega
            (let [[next-node add-bindings] (consume-next g node (first loc))]
              (recur (next-without-down loc) next-node (concat bindings add-bindings)))
            ;;head location got a match, so we go on at this level
            (recur (zip/next head-loc) next-node (concat bindings add-bindings))))
        ;;there is no possibility to go down a level deeper, so we can just
        ;;consume directly
        (let [[next-node add-bindings] (consume-next g node (first loc))]
          (recur (zip/next loc) next-node (concat bindings add-bindings)))))))
```

1. Testing

Here are a few sample calls and tests:

```clojure
(use 'clojure.test)

(let [initial-matching-set (close [(initial-matching-item 1 '([? 2] a b))
                                   (initial-matching-item 2 '([? 1] a))
                                   (initial-matching-item 3 '(?))])
      dag (create-dag initial-matching-set)]
  (is (= '[([3 (?) ()]) (1)]
         (match-expression dag initial-matching-set 1)))
  (is (= '[([3 ([? 1] a) ()] [2 ([? 1] a) ()]) (+)]
         (match-expression dag initial-matching-set '(+ a))))
  (is (= '[([3 ([? 2] a b) ()] [1 ([? 2] a b) ()]) (+)]
         (match-expression dag initial-matching-set '(+ a b))))
  (is (= '[([3 (?) ()]) ((+ a b c))]
         (match-expression dag initial-matching-set '(+ a b c)))))
```

#### 1.1.5 Compiling the DAG to a fast clojure function

The expression matching can be taken a level further, to the point that the dag is compiled to a fast Clojure function. The resulting function will look like this:

```clojure
(fn [expression]
  (let [loc (zip/seq-zip expression)]
    ;;now code for the single transitions
    (or
     ;;if there are possible transitions in the dag that lead one level down -
     ;;if not, this part is replaced by false and the next branch of the or is taken
     (and (zip/branch? loc) ;;fail if we are not in a branch
          (let [head-loc (zip/next loc)]
            (case (first head-loc) ;;fast dispatch on the function symbol
              <function-symbol> (and (check-if-argument-count-matches)
                                     <code-for-the-next-transitions>)
              #_ (...)
              ;;default case
              <code-for-wildcard-transition or nil if no wildcard>)))
     ;;if there is no matching transition for the current head symbol
     ;;try matching the whole current subtree
     (case (first loc)
       <variable-or-constant> <code-for-next-transition>
       #_ (...)
       <code-for-wildcard-transition or nil if there is no wildcard>))))
```

In the end nodes of the decision tree, the code either returns nil for a failure node, or sorts the applicable rules by priority (currently just their label, but one could introduce the rule that more specific rules come first) and for each rule defines the bindings, checks the conditions and returns its result. Therefore, we now extend the notion of a pattern to the notion of a rule. Currently this is really low level, and the rule engine on top of this should take a more human-readable form.
A rule has the form [<label> <pattern> <conditions> <result> <wildcard-positions>]. label and pattern are the same as before, conditions is a list of expressions to evaluate after a successful match, result is the rhs of the rule, and wildcard-positions maps the wildcards in the pattern to their positions in the expression. With this, the compile-rules function can be defined.

```clojure
(defn get-in-expression [expression key]
  (loop [loc (zip/seq-zip expression)
         [k & ks] key]
    (if k
      (let [new-loc (loop [k k loc (zip/down loc)]
                      (if (> k 0)
                        (recur (dec k) (zip/right loc))
                        loc))]
        (recur new-loc ks))
      (first loc))))

(defn compile-step [g current-state rule-map]
  (let [possible-moves (doall (map last (:next (get g current-state))))
        head-moves (doall (filter sequential? possible-moves))
        current-level-moves (doall (remove sequential? possible-moves))]
    (if (empty? possible-moves)
      `(and (zip/end? ~'loc)
            ;;current-state was successfully matched. Now get the results for
            ;;the matched rules in current-state
            (or ~@(for [[label & rest] (sort-by first (filter final? current-state))
                        :let [[conditions result omga-positions] (get rule-map label)]]
                    `(let [~@(for [[name pos] omga-positions
                                   entry [name `(get-in-expression ~'expression ~pos)]]
                               entry)]
                       (and ~@(concat conditions [result]))))))
      `(or ~(if (empty? head-moves)
              'false
              ;;have to test going a level deeper
              `(and (zip/branch? ~'loc)
                    (let [~'head-loc (zip/next ~'loc)]
                      (case (first ~'head-loc)
                        ;;now all next steps have to be written down in a
                        ;;case - the right-hand side will be a recursive
                        ;;call to create the code at the next level
                        ;;the default of case is either nil or the level
                        ;;from following a [? <number>] label in the graph
                        ~@(concat
                           (for [[s c] head-moves
                                 :when (not= omega s)
                                 entry [s `(and (= (dec (count (zip/children ~'loc))) ~c)
                                                (let [~'loc (zip/next ~'head-loc)]
                                                  ~(compile-step g (get-next-node g current-state [s c])
                                                                 rule-map)))]]
                             entry)
                           [(let [omega-downs (filter #(= (first %) omega) head-moves)]
                              `(case (dec (count (zip/children ~'loc)))
                                 ~@(concat
                                    (for [[omga c] omega-downs
                                          entry [c `(let [~'loc (zip/next ~'head-loc)]
                                                      ~(compile-step g (get-next-node g current-state [omega c])
                                                                     rule-map))]]
                                      entry)
                                    ;;no further defaulting possible - fail
                                    '(nil))))])))))
           (case (first ~'loc)
             ~@(concat
                (for [symbol current-level-moves
                      :when (not= omega symbol)
                      entry [symbol `(let [~'loc (next-without-down ~'loc)]
                                       ~(compile-step g (get-next-node g current-state symbol)
                                                      rule-map))]]
                  entry)
                [(if (some #{omega} current-level-moves)
                   ;;we have a default case to fall back to
                   `(let [~'loc (next-without-down ~'loc)]
                      ~(compile-step g (get-next-node g current-state omega) rule-map))
                   'nil)]))))))

(defn compile-rules [rules]
  (let [res (for [[label pattern conditions result omga-positions] rules]
              [(initial-matching-item label pattern)
               [label [conditions result omga-positions]]])
        initial-matching-set (close (map first res))
        rule-map (into {} (map second res))
        dag (create-dag initial-matching-set)]
    `(fn [~'expression]
       (let [~'loc (zip/seq-zip ~'expression)]
         ~(compile-step dag initial-matching-set rule-map)))))
```

1. Tests with example rules

Here are two example rules:

    (f a a ?a a) => ?a
    (f (g a ?b) a ?b a) => ?b

Encoded in the current low-level representation they become:

```clojure
[[1 '([f 4] a a ? a) [] '?a '{?a [3]}]
 [2 '([f 4] [g 2] a ? a ? a) '[(= ?a ?b)] '?b '{?b [1 2] ?a [3]}]]
```

Here are the corresponding tests:

```clojure
(let [rules [[1 '([f 4] a a ? a) [] '?a '{?a [3]}]
             [2 '([f 4] [g 2] a ? a ? a) '[(= ?a ?b)] '?b '{?b [1 2] ?a [3]}]]
      f (eval (compile-rules rules))]
  (is (= 'c (f '(f (g a c) a c a))))
  (is (not (f '(f (g a b) a c a))))
  (is (= 'a (f '(f a a a a))))
  (is (not (f '(f a a a b)))))
```

2. Example code

The compiled code for the two rules above looks like this:

```clojure
(clojure.core/fn [expression]
 (clojure.core/let [loc (clojure.zip/seq-zip expression)]
  (clojure.core/or
   (clojure.core/and (clojure.zip/branch? loc)
    (clojure.core/let [head-loc (clojure.zip/next loc)]
     (clojure.core/case (clojure.core/first head-loc)
      f (clojure.core/and
         (clojure.core/= (clojure.core/dec (clojure.core/count (clojure.zip/children loc))) 4)
         (clojure.core/let [loc (clojure.zip/next head-loc)]
          (clojure.core/or
           (clojure.core/and (clojure.zip/branch? loc)
            (clojure.core/let [head-loc (clojure.zip/next loc)]
             (clojure.core/case (clojure.core/first head-loc)
              g (clojure.core/and
                 (clojure.core/= (clojure.core/dec (clojure.core/count (clojure.zip/children loc))) 2)
                 (clojure.core/let [loc (clojure.zip/next head-loc)]
                  (clojure.core/or false
                   (clojure.core/case (clojure.core/first loc)
                    a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                       (clojure.core/or false
                        (clojure.core/case (clojure.core/first loc)
                         (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                          (clojure.core/or false
                           (clojure.core/case (clojure.core/first loc)
                            a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                               (clojure.core/or false
                                (clojure.core/case (clojure.core/first loc)
                                 (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                                  (clojure.core/or false
                                   (clojure.core/case (clojure.core/first loc)
                                    a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                                       (clojure.core/and (clojure.zip/end? loc)
                                        (clojure.core/or
                                         (clojure.core/let [?b (optimal-left-to-right-pattern-matching-automata.core/get-in-expression expression [1 2])
                                                            ?a (optimal-left-to-right-pattern-matching-automata.core/get-in-expression expression [3])]
                                          (clojure.core/and (= ?a ?b) [?a ?b])))))
                                    nil))))))
                            nil))))))
                    nil))))
              (clojure.core/case (clojure.core/dec (clojure.core/count (clojure.zip/children loc)))
               nil))))
           (clojure.core/case (clojure.core/first loc)
            a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
               (clojure.core/or false
                (clojure.core/case (clojure.core/first loc)
                 a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                    (clojure.core/or false
                     (clojure.core/case (clojure.core/first loc)
                      (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                       (clojure.core/or false
                        (clojure.core/case (clojure.core/first loc)
                         a (clojure.core/let [loc (optimal-left-to-right-pattern-matching-automata.core/next-without-down loc)]
                            (clojure.core/and (clojure.zip/end? loc)
                             (clojure.core/or
                              (clojure.core/let [?a (optimal-left-to-right-pattern-matching-automata.core/get-in-expression expression [3])]
                               (clojure.core/and ?a)))))
                         nil)))))
                 nil)))
            nil))))
      (clojure.core/case (clojure.core/dec (clojure.core/count (clojure.zip/children loc)))
       nil))))
   (clojure.core/case (clojure.core/first loc)
    nil))))
```

Created: 2014-04-18 Fri 12:51 Emacs 24.2.1 (Org mode 8.2.5g)

### TheoryOverflow

#### PTAS for Multidimensional Knapsack for unfixed dimension

There are several PTASes for Multidimensional Knapsack with running time O(n^{d/e}), where d is the dimension and 0 < e < 1. These are PTASes only when d is assumed to be fixed. Is there a PTAS for Multidimensional Knapsack when the dimension is not fixed?

### CompsciOverflow

#### Studying Skiena. War Story: What's Past is Prolog

I am reading The Algorithm Design Manual, 2nd Edition. I am at the "What's Past is Prolog" war story.
This war story is available on the web here. I do not follow this statement:

> Since the rules were ordered, each node in the subtree must represent the root of a run of consecutive rules, so there were only ${n \choose 2}$ possible nodes to choose from for this tree...

nor this one:

> The rules in each run must be consecutive, so there are only ${n \choose 2}$ possible runs to worry about.

My question: How does the fact that the rules are consecutive lead to the inference about the number of nodes or runs (which is ${n \choose 2}$)? It seems I'm missing some intermediate reasoning steps.

### StackOverflow

#### Conflicting cross-version suffixes in: org.scalamacros:quasiquotes

I am trying to use scala-pickling in one of my projects. I tried to mimic the build file of macroid, which seems to use pickling too, but I keep getting this error on sbt test:

```
[error] Modules were resolved with conflicting cross-version suffixes in dijon:
[error]    org.scalamacros:quasiquotes _2.10, _2.10.3
java.lang.RuntimeException: Conflicting cross-version suffixes in: org.scalamacros:quasiquotes
        at scala.sys.package$.error(package.scala:27)
        at sbt.ConflictWarning$.processCrossVersioned(ConflictWarning.scala:47)
        at sbt.ConflictWarning$.apply(ConflictWarning.scala:30)
        at sbt.Classpaths$$anonfun$61.apply(Defaults.scala:1044)
        at sbt.Classpaths$$anonfun$61.apply(Defaults.scala:1044)
```

Full build log is here. What am I doing wrong? What should I change in my build.sbt to fix this? I should also be able to cross-compile and release my library against both 2.10.x and 2.11.x.

#### Spark runs out of memory when grouping by key

I am attempting to perform a simple transformation of common crawl data using Spark hosted on EC2, following this guide. My code looks like this:

```scala
package ccminer

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object ccminer {
  val ENGLISH = "english|en|eng"
  val SPANISH = "es|esp|spa|spanish|espanol"
  val TURKISH = "turkish|tr|tur|turc"
  val GREEK = "greek|el|ell"
  val ITALIAN = "italian|it|ita|italien"
  val ALL = (ENGLISH :: SPANISH :: TURKISH :: GREEK :: ITALIAN :: Nil).mkString("|")

  def langIndep(s: String) = s.toLowerCase().replaceAll(ALL, "*")

  def main(args: Array[String]) {
    if (args.length != 3) {
      System.err.println("Bad command line")
      System.exit(-1)
    }
    val CLUSTER = "spark://???"
    val sc = new SparkContext(CLUSTER, "Common Crawl Miner",
      System.getenv("SPARK_HOME"),
      Seq("/root/spark/ccminer/target/scala-2.10/cc-miner_2.10-1.0.jar"))
    val data = sc.sequenceFile[String, String](args(0))
    data.map({ case (k, v) => (langIndep(k), v) })
      .groupByKey(args(2).toInt)
      .filter({ case (k, vs) => vs.size > 1 })
      .saveAsTextFile(args(1))
  }
}
```

And I am running it with the following command:

```
sbt/sbt "run-main ccminer.ccminer s3n://aws-publicdatasets/common-crawl/parse-output/segment/1341690165636/textData-* s3n://parallelcorpus/out/ 2000"
```

But very quickly it fails with errors as follows:

```
java.lang.OutOfMemoryError: Java heap space
        at com.ning.compress.BufferRecycler.allocEncodingBuffer(BufferRecycler.java:59)
        at com.ning.compress.lzf.ChunkEncoder.<init>(ChunkEncoder.java:93)
        at com.ning.compress.lzf.impl.UnsafeChunkEncoder.<init>(UnsafeChunkEncoder.java:40)
        at com.ning.compress.lzf.impl.UnsafeChunkEncoderLE.<init>(UnsafeChunkEncoderLE.java:13)
        at com.ning.compress.lzf.impl.UnsafeChunkEncoders.createEncoder(UnsafeChunkEncoders.java:31)
        at com.ning.compress.lzf.util.ChunkEncoderFactory.optimalInstance(ChunkEncoderFactory.java:44)
        at com.ning.compress.lzf.LZFOutputStream.<init>(LZFOutputStream.java:61)
        at org.apache.spark.io.LZFCompressionCodec.compressedOutputStream(CompressionCodec.scala:60)
        at org.apache.spark.storage.BlockManager.wrapForCompression(BlockManager.scala:803)
        at org.apache.spark.storage.BlockManager$$anonfun$5.apply(BlockManager.scala:471)
        at org.apache.spark.storage.BlockManager$$anonfun$5.apply(BlockManager.scala:471)
        at org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:117)
        at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:174)
        at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:164)
        at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:161)
```

#### Using Elimination Theory to construct Rigid Matrices. (arXiv:0910.5301v3 [cs.CC] UPDATED)

The rigidity of a matrix A for target rank r is the minimum number of entries of A that must be changed to ensure that the rank of the altered matrix is at most r. Since its introduction by Valiant (1977), rigidity and similar rank-robustness functions of matrices have found numerous applications in circuit complexity, communication complexity, and learning complexity. Almost all n x n matrices over an infinite field have rigidity (n-r)^2. It is a long-standing open question to construct infinite families of explicit matrices even with superlinear rigidity when r = Omega(n).
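In symbols, the rigidity function described in the first sentence above can be written as follows (a restatement of the definition, writing F for the underlying field; the notation R_A(r) is the standard one for rigidity, not taken from this abstract):

```latex
R_A(r) \;=\; \min_{C \in F^{n \times n}}
  \bigl\{\, \#\{(i,j) : C_{ij} \neq 0\} \;:\; \operatorname{rank}(A + C) \le r \,\bigr\}
```

The claim that almost all n x n matrices over an infinite field have rigidity (n-r)^2 then says that R_A(r) = (n-r)^2 outside a lower-dimensional subset of matrices A.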

In this paper, we construct an infinite family of complex matrices with the largest possible, i.e., (n-r)^2, rigidity. The entries of an n x n matrix in this family are distinct primitive roots of unity of orders roughly exp(n^2 log n). To the best of our knowledge, this is the first family of concrete (but not entirely explicit) matrices having maximal rigidity and a succinct algebraic description.

Our construction is based on elimination theory of polynomial ideals. In particular, we use results on the existence of polynomials in elimination ideals with effective degree upper bounds (effective Nullstellensatz). Using elementary algebraic geometry, we prove that the dimension of the affine variety of matrices of rigidity at most k is exactly n^2-(n-r)^2+k. Finally, we use elimination theory to examine whether the rigidity function is semi-continuous.

### StackOverflow

#### Way to mention null-parameter in documentation?

I am developing a kind of API and I am wondering which way is generally preferred for documenting null parameters:

A. Write something like @throws NullPointerException if p is null, assuming every method whose doc says nothing about it will accept a null parameter:

```java
/**
 * Does something...
 * @param p the parameter to do something...
 * @throws NullPointerException if p is null
 ...
```


B. Write something like this parameter can be null, assuming a null parameter is not accepted unless the doc explicitly says so:

```java
/**
 * Does something...
 * @param p the parameter... this can be null
 ...
```


Common sense tells me A is more correct, but it is honestly painful to write that out every time.

Which way would you choose?

Thank you!

#### sbt test:doc Could not find any member to link

I'm attempting to run sbt test:doc and I'm seeing a number of warnings similar to the one below:

```
[warn] /Users/tleese/code/my/stuff/src/test/scala/com/my/stuff/common/tests/util/NumberExtractorsSpecs.scala:9: Could not find any member to link for "com.my.stuff.common.util.IntExtractor".
```

The problem appears to be that Scaladoc references from test sources to main sources are not able to link correctly. Any idea what I might be doing wrong or need to configure?

Below are the relevant sections of my Build.scala:

```scala
val docScalacOptions = Seq("-groups", "-implicits", "-external-urls:[urls]")

scalacOptions in (Compile, doc) ++= docScalacOptions
scalacOptions in (Test, doc) ++= docScalacOptions
autoAPIMappings := true
```

### CompsciOverflow

#### About computer science and category theory [duplicate]

I read that category theory has a lot to do with how programs and information can be organised. Can category theory simplify various programming strategies? If a specific category is represented as a directed graph, is this similar to the flow charts used in programming?

### /r/compsci

#### Building trees from preorder and inorder list of values.

I am to draw the tree that results from these traversals. I thought I had a good grasp of these traversals, as they do not seem that complicated after looking at the Wikipedia page. But I do not understand how these could be valid trees unless they are very off-center from the root. Could someone give me some guidance and maybe explain how these start to build? Thanks

Pre Order: 14, 24, 34, 20, 36, 42, 22, 39, 27, 18, 5, 17, 11, 7, 15

In Order: 20, 34, 36, 24, 22, 42, 39, 14, 5, 18, 17, 27, 7, 11, 15
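As a general sketch of how such a reconstruction works (not part of the original post): the first preorder value is always the root, and its position in the inorder list splits the remaining values into the left and right subtrees, which can then be rebuilt recursively. A minimal Python sketch, assuming all values are distinct:

```python
def build_tree(preorder, inorder):
    """Rebuild a binary tree from its preorder and inorder traversals.

    Returns nested tuples (value, left, right); None for an empty tree.
    Assumes all values are distinct.
    """
    if not preorder:
        return None
    root = preorder[0]            # preorder visits the root first
    i = inorder.index(root)       # inorder splits left / right subtrees
    left = build_tree(preorder[1:1 + i], inorder[:i])
    right = build_tree(preorder[1 + i:], inorder[i + 1:])
    return (root, left, right)

def preorder_of(t):
    return [] if t is None else [t[0]] + preorder_of(t[1]) + preorder_of(t[2])

def inorder_of(t):
    return [] if t is None else inorder_of(t[1]) + [t[0]] + inorder_of(t[2])

# The traversals from the post above
pre = [14, 24, 34, 20, 36, 42, 22, 39, 27, 18, 5, 17, 11, 7, 15]
ino = [20, 34, 36, 24, 22, 42, 39, 14, 5, 18, 17, 27, 7, 11, 15]

tree = build_tree(pre, ino)
# Round-tripping the traversals confirms the reconstruction:
# the root is 14 (the first preorder value), with 24 and 27 as its children.
```

Applied to the lists above, the tree is indeed heavily skewed relative to the root, which is why it looks "off-center": 14's left subtree holds the seven values before it in the inorder list, and its right subtree the seven after it.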

submitted by rezaw