Talk:Wave Surfing

The Next Level

Quite a while ago I had an idea that could bring Wave Surfing to a new level, but I never got around to implementing it. Since the wiki's been quiet the last few days, I thought perhaps something like this might provide a kick to get things rolling again. My idea is Wave Surfing with a twist: not only do you surf the wave(s) that have been fired, but you also take into consideration the waves that have not yet been fired. By predicting your movement, you can predict what the enemy will see, and thus by looking at your movement stats you can see where they are likely to fire. Now, for the interesting part: by varying where you go, not only do you vary what GFs you reach, but you also vary what the enemy sees, and thus where they shoot. Danger can be calculated not only as a low point on a graph, but also on the entropy of the enemy's firing stats when presented with this movement. The more peaked the profile, the better this movement option (provided we can get to a low point). A system like this could basically 'figure out' how to do a movement type like Stop And Go, and probably other amazing types of movement we haven't thought of yet. There! How's that for some food for thought? =) --Skilgannon 20:23, 9 July 2009 (UTC)
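
To make the "entropy of the enemy's firing stats" part concrete, here is a minimal, hypothetical sketch (not taken from any existing bot): for each candidate movement plan, build the visit-count profile the enemy would record if we executed it, then score the plan by how peaked that profile is. The array of predicted visit counts is an assumed input produced by your own movement prediction.

    // Hypothetical sketch: lower entropy = more peaked profile = easier to anticipate
    // where the enemy will aim (good, as long as our actual destination is a low bin).
    public static double profileEntropy(double[] predictedVisitCounts) {
        double total = 0;
        for (double v : predictedVisitCounts) {
            total += v;
        }
        if (total <= 0) {
            return 0; // empty profile: treat as maximally peaked
        }
        double entropy = 0;
        for (double v : predictedVisitCounts) {
            if (v > 0) {
                double p = v / total;
                entropy -= p * Math.log(p);
            }
        }
        return entropy;
    }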

Some interesting thoughts here. I'm initially skeptical (as usual =)) because it reminds me of the diminishing returns on surfing waves beyond the first couple. But ignoring that... One hurdle with this is that while you can predict your own movement, you can't (accurately) predict the enemy's movement, which also impacts what they "see" -- distance, lateral velocity, wall distance, etc. depend on enemy location as well as your own. It's a funny thought to use your gun to predict their movement to assist in your own movement. =)

So let's say you can guesstimate the enemy movement -- and maybe the full range of their movement options are pretty similar for our purposes anyway. So for each of my movement options, I'd figure out if/when the enemy would fire (based on gun heat) during that prediction, check my stats of the enemy's bullets fired / bullets hit, factor in the precise max escape angle of that firing situation, and multiply the probability he would hit into the danger calculation. Kinda similar to how many of us factor in distance (and maybe instead of distance, since these two factors would not be independent). I'm still skeptical, but this sounds really interesting!

Good to see I'm not the only one still pondering the next big thing. =) I still want to find a breakthrough in Anti-Surfer Targeting... I know it's there!

--Voidious 20:52, 9 July 2009 (UTC)
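
As a rough, hypothetical illustration of folding "when will they fire and how likely are they to hit" into a danger score, as described above: the inputs (gun heat, cooling rate, a hit-probability buffer, the precise vs. classic escape angles) are all assumed to come from your own tracking, and the weighting below is made up rather than taken from any bot.

    // Hypothetical sketch: danger contributed by a wave the enemy has NOT fired yet.
    public static double unfiredWaveDanger(double enemyGunHeat, double gunCoolingRate,
            double[] hitProbabilityBins, int predictedBin,
            double preciseMEA, double classicMEA) {
        // Gun heat cools by gunCoolingRate per tick, so this is when they can fire next.
        int ticksUntilFire = (int) Math.ceil(enemyGunHeat / gunCoolingRate);
        double hitProbability = hitProbabilityBins[predictedBin];
        // A reachable range narrower than the classic asin(8/Vb) makes any shot more likely to hit.
        double escapeFactor = classicMEA / Math.max(preciseMEA, 1e-9);
        // Arbitrary discount for waves that are further from being fired.
        return hitProbability * escapeFactor / (1 + ticksUntilFire);
    }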

Gaff's anti-surfer score in the TC2K7 isn't enough? I have a dev. version that scores 82.93 against surfers (15 seasons) but worse overall once the random movers are averaged in. If you create a new Anti-Surfing Challenge I'll be happy to keep looking for improvements. =) --Darkcanuck 21:49, 9 July 2009 (UTC)
Hmm, those are some pretty sick scores. Besides the scores themselves, it's particularly impressive that you nail Shadow and RaikoMX so well with one gun. I'm guessing not by the description, but do you simulate enemy surf stats in any way? That's the main area where I think a breakthrough might await, but even my best efforts haven't yielded much. That Anti-Surfing Challenge sounds pretty appealing, actually, even if it will do squat in closing the gap to DrussGT. =) --Voidious 00:34, 10 July 2009 (UTC)
Thanks, I'm very pleased with that gun. There's no simulation involved, besides a rough calculation of the min + max reachable GFs -- the rest is all NN magic. I promise to write it up one of these days, I'd like to see what tangents others might take. --Darkcanuck 03:58, 10 July 2009 (UTC)
I once tried simulating the surfer stats, but with a normal bin smoother the result has a load of empty bins with no data in them. So you can't really decide where to fire unless, of course, you are simulating the surfer's movement too. But a next-generation go-to surfer could kill that instantly. Note that the next-generation go-to surfer is not this next-generation wave surfer; it would be something like picking a point close to you, staying still until the last moment, and then moving to that spot just in time as the wave arrives, or something beyond that.
We can have an anti-surfer challenge in Challenge 2K9; I think we should have one. I'll continue working on Challenge 2K9 when I have time (October, when the school holidays arrive). » Nat | Talk » 05:15, 10 July 2009 (UTC)
Yep, you probably shouldn't use only enemy surf data to shoot at. But there's important data there that I really think can be factored into the gun. I'd definitely account for what GFs they can still reach on each wave (ie, the "simulate their movement" part). I don't think an Anti-Surfer Challenge needs to be a subchallenge of another challenge, since most of our movement and targeting challenges aim to gauge rumble performance, while this one clearly would not. --Voidious 13:44, 10 July 2009 (UTC)
If you look closely, Challenge 2K9 is a main challenge plus a bunch of subchallenges. I think most anti-surfer guns are indirectly simulating enemy surf stats already. Since we usually reduce the stats twofold on a hit, the result ends up like the simulated enemy surfing stats plus the normal stats deciding where to shoot, with a load of empty bins. However, we can assume that the enemy will not change its direction until one wave has passed over. Then we may guess which direction the enemy will go next, etc. » Nat | Talk » 14:47, 10 July 2009 (UTC)

Fiddlesticks, this was going to be my killer idea once I made a wavesurfing bot!... Creating misleading spikes. --CrazyBassoonist 21:10, 9 July 2009 (UTC)

Approaching this from a slightly different direction, you could "snap" your movement to certain points when you know they're about to fire, essentially reducing the resolution of their segmentation. In a single turn you can do that with velocity, snapping it to 8, 5, 2, 0, -2, -5, or -8. In two turns you can also keep acceleration constantly at 0. With a few more turns' warning you could limit velocity to even fewer values. This would be easiest against bots that fire slow, infrequent shots since your movement would be disrupted less often. -- Synapse 23:29, 9 July 2009 (UTC)
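
A minimal, hypothetical sketch of the snapping itself (run only on the tick or two before the enemy's gun heat reaches zero); in practice you would feed the returned magnitude into setMaxVelocity() and keep your setAhead() direction, since Robocode doesn't let you set velocity directly.

    // Hypothetical sketch: snap the speed we request to a coarse set of values so the
    // enemy's segmentation sees only a handful of distinct velocities at fire time.
    public static double snapVelocity(double desiredVelocity) {
        double[] allowedSpeeds = {0, 2, 5, 8};
        double best = allowedSpeeds[0];
        for (double speed : allowedSpeeds) {
            if (Math.abs(Math.abs(desiredVelocity) - speed) < Math.abs(Math.abs(desiredVelocity) - best)) {
                best = speed;
            }
        }
        return Math.signum(desiredVelocity) * best;
    }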

Interesting... for a simple movement (stop-and-go, flat, single-buffer flattenerless wave surfing), it is possible to predict a reasonably accurate future location. However, it needs a lot of processing, so I'm not sure it would run in time. It might also take up to 100 rounds to learn how they move. I've never tried it (but I've been writing a library that does this for the past 2 months. It's called BattlePredictor; originally I intended to use it for melee).

One thing that should be considered when creating a wave surfing robot of this generation: it would be totally useless against a simple gun and would probably lose score (though I highly doubt that using this would turn out to be another performance enhancing bug). » Nat | Talk » 01:28, 10 July 2009 (UTC)

I think you guys might have gone off on a tangent here... I don't think Skilgannon's idea really involves "creating misleading spikes" so much as factoring in which movement profile will be presented to the enemy and trying to choose an advantageous one. A best guess at the enemy's movement would be simple and might work well enough in most situations, so it need not be slow. It would be augmenting normal surfing instead of replacing it, so you shouldn't lose score against anyone. I like the idea of snapping velocity, Synapse; I think you might gain performance against simple learning guns but lose performance against bigger guns (which have fancier segments / attributes that could "see through" the trick). --Voidious 13:44, 10 July 2009 (UTC)

Still, even if we augment rather than replace, the augmented result can make you drive into danger against some types of gun. I don't know that it will happen, but my hunch is that this will not improve the score. That doesn't mean I won't try it, though. Synapse, if you limit the acceleration to zero, wouldn't it just go back and forth? I know that velocity is the most popular segment, if distance isn't. But I don't think there are many guns that segment on acceleration over the last n ticks (I know some bots use distance travelled over the last few/several turns). » Nat | Talk » 14:47, 10 July 2009 (UTC)

Nat, the velocity and acceleration limitations would only be applied for two turns starting immediately before gunheat zero, so the enemy is forced to put all his battle data into a small number of segmentations. -- Synapse 16:19, 10 July 2009 (UTC)

The reason I think this might be a big improvement over regular surfing is against the simple targeters: before, we were just acting reactively, responding to the enemy bullets in the air, whereas now we could act proactively, by actually forcing bots to shoot where we want them to. This would eliminate all the 'lucky shots' that are the usual cause of being hit, due to not having enough room to manoeuvre (from walls) or being too close to dodge. Against other surfers, this bot could have a large advantage if it stayed at close distances, where it can still predict enemy shots a large number of ticks ahead, but the enemy would only have distance/bulletVelocity ticks to work with. Then there's the false spikes that were mentioned: I don't think these are really feasible, as the only way to make them is to create real spikes, and the more predictable stuff you do, the worse =) --Skilgannon 16:15, 10 July 2009 (UTC)

Some questions moved to User_talk:Starrynte#Wave_Surfing_Questions


For me, there is no 'panacea' that is technically possible. I'd like to be able to get a copy of the enemy classes and just simulate what they'll do from there. If not, then have some algorithm which adapts to act exactly the same as them. Unfortunately, the first is against Robocode rules, and the second is essentially a perfect DC movement/gun, which defeats the entire point of statistics, namely that there will be some situations where your rules are wrong. As such, I settle for statistics as a 'next best' option. I've often thought that during the first 1 or 2 rounds against a learning bot the flattener should be enabled. However, it usually takes 1 or 2 rounds to determine whether the enemy is learning or not, so that is a bit of a moot point. Maybe assume some common ones, like HOT, linear and circular, and from there just say 'if any bullet arrives which is not one of these, enable the flattener'. The trouble for me with this is that you are then specializing your bot against this specific rumble set, and not all bots which could be written, in general. --Skilgannon 08:13, 18 May 2010 (UTC)
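
A hedged sketch of that last idea: when a bullet hits you, compare its heading (from the HitByBulletEvent) against what head-on, linear and circular targeting would have aimed at when the shot was fired, and if none of them match within a tolerance, treat the gun as a learning gun and enable the flattener. The predicted angles are assumed inputs computed from your own position history.

    // Hypothetical sketch: does this bullet heading match any simple targeter?
    // Needs: import robocode.util.Utils;
    public static boolean firedBySimpleTargeter(double bulletHeading, double headOnAngle,
            double linearAngle, double circularAngle, double tolerance) {
        double[] simpleAngles = {headOnAngle, linearAngle, circularAngle};
        for (double angle : simpleAngles) {
            if (Math.abs(Utils.normalRelativeAngle(bulletHeading - angle)) < tolerance) {
                return true; // consistent with HOT / linear / circular: keep the flattener off
            }
        }
        return false; // none matched: likely a learning gun, consider enabling the flattener
    }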

Ah, hmm. I think there is at least one large leap left in surfing stats. Our current surfers generally "learn what the enemy gun has learned" instead of "learning how the enemy gun learns". Against most learning guns, where a typical flattener will hurt your score, I still think there is a safe way to predict/dodge firing angles that you've never been hit at before... --Voidious 12:42, 18 May 2010 (UTC)

Well, "learning how the enemy gun learns" is essentially what a pure flattener does, when facing a GF-based gun. The only times it wouldn't be advantageous could be: 1) Extremely different 'categorization' of the situation (i.e. pattern matching, or rather esoteric segmentation), or 2) Different prediction mechanism (i.e. PIF). Now, I don't think #2 makes a huge difference to the mostly-orbital surfer, so the key lies in #1. My current dream of what would be good would be having a "bank" of different flatteners, which categorize the situation in highly different ways (not just subtle segmentation changes), and then whenever the bot does get hit, see which flatteners expected that spot to be dangerous and rate them higher. Really, the enemy firing predictors in this group don't have to be 'flatteners' strictly speaking. Just make them a set of "enemy gun simulators" really, include the non-learning ones in the same infrastructure, perhaps include the traditional 'surfer' learning in the set as well, and let the best predictor of actual danger win. The idea is, rather than learn from what the enemy gun has done before, try to match it up with a "likely approximately correct hypothesis". Given how targeting has a 1-dimensional result, and the amount of solid data is sparse for normal surfing, I think picking the best matching predictor out of a set is reasonable. Note, I don't mean the predictor with the 'profile' matching the surfing data the closest; that wouldn't be accurate, because the enemy fires at the spot they think is most likely, not with a distribution matching the curve they see. I mean seeing which one(s) best predicted the hits/misses. --Rednaxela 14:04, 18 May 2010 (UTC)
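
Here is a hedged sketch of that "bank of enemy gun simulators" idea, with an assumed interface; the scoring rule (credit each simulator by the danger it had assigned to the angle that actually hit us, with an arbitrary decay of 0.95) is just one plausible way to "let the best predictor of actual danger win".

    import java.util.List;

    // Hypothetical interface: each simulator rates how dangerous each firing angle is.
    interface EnemyGunSimulator {
        double dangerAt(double guessFactor);
    }

    class SimulatorBank {
        private final List<EnemyGunSimulator> simulators;
        private final double[] scores;

        SimulatorBank(List<EnemyGunSimulator> simulators) {
            this.simulators = simulators;
            this.scores = new double[simulators.size()];
        }

        // When we actually get hit, credit the simulators that expected that angle.
        void onHitAt(double guessFactor) {
            for (int i = 0; i < simulators.size(); i++) {
                scores[i] = 0.95 * scores[i] + simulators.get(i).dangerAt(guessFactor);
            }
        }

        // Surf the predictions of whichever simulator has best explained the hits so far.
        EnemyGunSimulator best() {
            int bestIndex = 0;
            for (int i = 1; i < scores.length; i++) {
                if (scores[i] > scores[bestIndex]) {
                    bestIndex = i;
                }
            }
            return simulators.get(bestIndex);
        }
    }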

Well, I kind of remember saying this some time ago, and you objected that movement doesn't have enough data to learn like this... See some old pages for BlackHole v2 (which is mostly abandoned); it has something like this, only not this explicit. --Nat Pavasant 16:19, 18 May 2010 (UTC)

When I said "learning how the enemy gun learns", I didn't just mean to learn like a gun. I mean to actually learn what their learning method is and what attributes they are keying off of. So a modern flattener doesn't do that - it assumes they're learning a certain way and simulates it. I agree you might have different classes of flatteners/simulators. And yeah, there just isn't enough bullet hit data for us to get much smarter with projecting dangers from them alone. Ok, I was going to keep this close to my chest in case it works out, but here are some of my current ideas:

  • Log virtual waves along with real waves. (The enemy is learning a lot from virtual waves and most of us don't even log them...)
  • When you get hit, you know the firing angle they used. You can convert this to a GuessFactor (see the sketch below). There are tiny discrepancies involving which tick's abs bearing they used, or if their gun finished turning, or some MiniBots hard-code the MEA, but not a whole lot of permutations. Abs bearing from fireTime - 1, position from fire time, and MEA = Math.asin(8/bullet speed) covers a ton of bots.
    • I bet we could even pretty easily figure out how many bins they use.
    • And if their GFs don't align with evenly spaced bins, maybe that's a sign it's DC or PIF.
  • Look up all previous situations where you visited this GF.
  • Compare this firing situation to the previous ones and try to figure out what they have in common. Ideally, a handful of attributes look very similar.
  • Customize a flattener and start flattening. You can keep tuning it on the fly.

Now, if you are consistently hit with unlikely GFs or ones you've never visited, they're not using GFs. Maybe you can figure out they're doing PM and try APM. If you start with "regular" surfing, you can just disable all this magic until the 3rd round and until their hit rate passes 3% - we already dodge simple targeters well enough. We should probably move this discussion to Talk:Wave_Surfing#The_Next_Level or something... --Voidious 14:54, 18 May 2010 (UTC)
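
A minimal sketch of the GuessFactor reconstruction mentioned in the list above, under the stated assumptions (absolute bearing taken from the tick before firing, classic MEA of asin(8/bulletSpeed)); the inputs are hypothetical values you would have logged on your own waves.

    // Hypothetical sketch: turn the heading of a bullet that hit us back into the
    // GuessFactor the enemy aimed at. Needs: import robocode.util.Utils;
    public static double hitGuessFactor(double bulletHeading, double absBearingAtFireMinus1,
            double bulletSpeed, int lateralDirectionAtFire) {
        double offset = Utils.normalRelativeAngle(bulletHeading - absBearingAtFireMinus1);
        double maxEscapeAngle = Math.asin(8 / bulletSpeed);
        double gf = lateralDirectionAtFire * offset / maxEscapeAngle;
        return Math.max(-1, Math.min(1, gf));
    }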

Began tinkering with "which attributes is he using" yesterday and realized some things. The main issue is that finding that an attribute is similar doesn't necessarily mean the enemy is using it to aim, it just means he should be. =) If I analyze visits and bullet hits separately, the same attributes are most similar in both. If I compare them, that might be something, but the ratios all come out very close to even. I think the fact that I'm surfing these attributes (or similar ones) means I will not leave really big traces to one attribute across the many times I've visited a certain GF, no matter how the enemy is aiming. --Voidious 02:14, 20 May 2010 (UTC)
I also thought through it. In my mind it would more be an adaptive weighting system. I think the trouble you are encountering is in your "compare this firing situation to the previous one and try to figure out what they have in common". My idea is that the algorithm would be something like: 1) extract N (N = large) scans that are near to our current scan in certain dimensions we're fairly sure they're using (eg latvel, accel, distance) 2) take K previous scans from this cluster with the nearest GF to the current bullet we know about 3) look for any dimensions that on average have a low distance from our current bullet 4) set the weighting to be such that this cluster would be chosen (more or less) ie. dimensions with low distance have high weight, and vice versa. Your thoughts? --Skilgannon 10:01, 20 May 2010 (UTC)
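A hypothetical sketch of steps 3 and 4 above, assuming each dimension has already been normalized (e.g. by its standard deviation, as discussed below) and that you've collected, for every dimension, the distances between the current scan and the K near-GF scans from step 2:

    // Hypothetical sketch: weight each dimension inversely to its average distance,
    // so dimensions that look consistent around this GF get a high weight.
    public static double[] inverseDistanceWeights(double[][] perDimensionDistances) {
        double[] weights = new double[perDimensionDistances.length];
        for (int d = 0; d < perDimensionDistances.length; d++) {
            if (perDimensionDistances[d].length == 0) {
                continue; // no samples for this dimension: leave its weight at 0
            }
            double sum = 0;
            for (double distance : perDimensionDistances[d]) {
                sum += distance;
            }
            double mean = sum / perDimensionDistances[d].length;
            weights[d] = 1.0 / (mean + 0.01); // small constant avoids division by zero
        }
        return weights;
    }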
Yeah, dynamic weighting is another possible application, I'm just not convinced there are many points there. I suppose my goal could be framed as dynamic weighting of the flattener dimensions, at least as a starting point. Using common dimensions to narrow down the search is an interesting thought, but I think we'd still find lots of similar dimensions even if the enemy isn't using them. But I will try some more experiments soon. (So far I'm just printing data against bots and comparing to actual segmentation.)
Normalization is a big issue here, too, since different attributes have different distributions. Right now I'm using sqrt(average squared difference) of each dimension, then dividing by the standard deviation of that dimension. --Voidious 16:33, 20 May 2010 (UTC)

I think I remember Skilgannon saying that if we are precise enough, we should be able to hide between the bins of, say, a 21-bin GF gun.

I don't think it can be precise enough to check for bins. A lot of robots don't wait for their gun to align before firing; this, along with high bin counts (I think Doki's 59 bins is high enough), should make it impossible to detect this precisely. The method of detecting the enemy's segmenting attributes is interesting, though. But attributes could look similar by coincidence, especially for robots that use huge buffers (Doki, Phoenix, etc.) or weighted buffers (WaveSerpent, etc.). And really, the gun alignment could throw this off easily. Sometimes the unaligned gun does better, because the 'unexpected' shot(s) add noise to the enemy's stats.

It may be just me, but I really think that optimizing for the vast majority of the RoboRumble doesn't seem right. I mean, yes, it will boost your score, but, well, you should be able to... err... I don't know... I'm finding it confusing right now (well, I am confused and stressed right now, given the state of my country...) --Nat Pavasant 16:19, 18 May 2010 (UTC)

Optimizing for the rumble population generally strikes me as "not truly intelligent", but at the same time, "there are only so many ways to skin a cat." I find it likely that any group of Robocoders would arrive at pattern matching, bearing offsets, and something resembling GuessFactors and PIF. So I don't feel like detecting those styles and special casing them is necessarily only specializing for the rumble. --Voidious 17:12, 18 May 2010 (UTC)

Entering the discussion kind of late, I would like to throw out an idea here (it's probably not original): basically the idea is to use your gun against yourself to determine where not to go. Your gun tells you where it expects you to be fired upon, and your movement code will then avoid it. This should help in movement, as your bot is now essentially avoiding two guns, the enemy's and your own. --Khanguy 00:20, 28 March 2011 (UTC)

That's pretty much what a flattener does. (I know that's a red link - someone should write it up =).) In addition to marking danger where you get hit (normal surfing), you mark danger at every spot you visit, even for bullets that miss you, to simulate how the enemy gun is learning. The catch is that against all but the most powerful guns, this can really hurt your score, so you have to be really careful about your decision to enable this mode. --Voidious 14:52, 28 March 2011 (UTC)
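A minimal, hypothetical sketch of that flattener bookkeeping, assuming a plain bin buffer and that you call this whenever any enemy wave (hit or miss) passes over you:

    // Hypothetical sketch: record the GuessFactor we actually occupied when a wave broke,
    // whether or not its bullet hit us, and surf these bins (usually blended with the
    // normal hit-only bins, and only when the enemy's hit rate justifies it).
    public static void recordVisit(double[] flattenerBins, double visitedGuessFactor) {
        int bin = (int) Math.round((visitedGuessFactor + 1) / 2 * (flattenerBins.length - 1));
        bin = Math.max(0, Math.min(flattenerBins.length - 1, bin));
        flattenerBins[bin]++;
    }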
Actually this reminds me, RaikoMX originally worked by firing its own gun against itself and dodging places that its bullets would likely be. That is technically the same as surfing, but more so. This is more akin to using multiple surf guns/buffers with different weights/segmentations (if valid). Many bots already do this. — Chase-san 15:14, 29 March 2011 (UTC)

The "Strategy" section

Interesting section, Ceasar. Mind if I rename it to something more like "A Game Theory Perspective on Wavesurfing"? I say that because that seems to be more what it is, plus I wouldn't say wave surfing inherently "employs ideas from game theory". Perhaps more like "ideas from game theory can be applied to wave surfing"?

Now that I've gotten what bothers me about the section out... I really think it's interesting to see this looked at from a game theory perspective. The general concepts here have been explored a fair bit in terms of how one tries to "flatten" a movement profile, and the "arms race" of movement and targeting exploiting each other's uneven probabilities. If there were only one wave at a time, then the ultimate "Nash equilibrium" from the movement side would simply be a movement that chose a uniform random value for the guessfactor to get to, after calculating the range it can achieve. The ultimate "Nash equilibrium" gun would be the same, in terms of having an equal chance of aiming anywhere the opponent could go. Though it was never referred to in game theory terms, this was well known.

The reality is more complicated though, even for the simplified case of the Nash equilibrium. In reality there are most often multiple waves in flight at any one time, and if you respond uniformly to the nearest wave, your movement profile will quickly start to approximate a gaussian distribution as the number of waves increases. This is due to movement choices on one wave limiting the movement choices on the next wave. Clearly, this has a movement weakness in that there will start to be angles that are significantly better to aim at than others. Perhaps an interesting area to explore would be the game theory of the multiple-wave-at-a-time case? It seems to me that would be a trickier puzzle and would likely vary significantly depending on the specifics of the situation. I'm rather sure that the 'ideal' route to maximize flatness of the movement profile in the long run would have to rely on the past history of movement, and thus would end up with short-term weaknesses, so I don't think simply optimizing for long-term movement profile flatness would quite be the Nash equilibrium. So if optimizing for long-term flatness is not the equilibrium, and neither is picking a random angle each wave, then I suspect the true Nash equilibrium lies somewhere in between.

This also reminds me of some anecdotal observations, such as how it's generally been found that even against strong targeting, it's rarely best to rely solely on a flattener.

--Rednaxela 03:42, 25 March 2011 (UTC)

Ah, that's all extremely interesting! To begin, don't hesitate to edit my edits- that's the whole point of a wiki of course! I'm just trying to put the ideas I have out there and hope to get some insight from the veterans in the process (as in, this :P).

Going onwards, very interesting to hear that game theory hadn't been tossed around before. Seems like a very natural fit, particularly given the use of bins. I suppose given that, I'll try to flesh out the ideas with some more pages on game theory ideas, and specifically as they apply to Robocode. Also, it somewhat sucks to hear that my theories are wrong and that I haven't solved Robocode. :P (I'm building a gun to do exactly what you describe there, haha.) More hope for my Targeting Matrix idea then!

But yeah, very interesting. I'll have to look into exploring what game theory has to suggest about multiple waves being in flight at once. My intuition is that it shouldn't really matter too much (that is, assuming a fair bit of simplification; really, each wave ought to be more or less independent, given the cooldown, provided the distance is long enough), but given the mere existence of walls, it's clear things are never so simple.

Anyway, thanks for the thoughts. That gives me something new to really think about.

--Ceasar 04:55, 25 March 2011 (UTC)

At the risk of sounding rude, I vote to remove or majorly condense the Game Theory section on this page. It's like most of the page and I don't feel like it really adds much... It feels more like a section of a Game Theory article that uses Wave Surfing as an example. --Voidious 18:50, 24 June 2011 (UTC)

Contents

Thread title                          Replies   Last modified
BS Learning                           2         15:22, 2 December 2017
What type of wave surfing is this?    3         09:50, 21 July 2017

BS Learning


The shadowed area isn't really safe until your bullet passes the enemy wave, so just do it every time an enemy wave breaks.

Keep tabs on the shadowed area the same way you keep track of hits, and retrieve them in the same way. Then the shadowed area could be handled like real shadows, but weighted.

Xor (talk) 15:16, 2 December 2017

I am going to do it like that in Oculus 1.3.3. Also I was too lazy to calculate without bullets passing the waves so I did it by simulation. It will be easier to add.

Dsekercioglu (talk) 15:22, 2 December 2017
 
 

What type of wave surfing is this?

I made this for a micro bot. It is wave surfing, but it is not GoTo or True wave surfing. It is something more like gravity surfing. Is there a name for this? Sorry for pasting all the code.

    // Assumed context for this snippet: these members live in an AdvancedRobot subclass,
    // with imports of java.util.ArrayList, java.util.Arrays, java.awt.Rectangle,
    // java.awt.geom.Rectangle2D, and a static import of robocode.util.Utils.normalRelativeAngle.
    public static double[][] m = new double[3][5]; // stats: [lateral velocity segment][time-to-impact segment]
    double enemyEnergy = 100;
    ArrayList<Wave> waves = new ArrayList<>();

    public void run() {
        setTurnRadarRightRadians(Double.POSITIVE_INFINITY);
    }

    public void onScannedRobot(robocode.ScannedRobotEvent e) {
        double bulletPower;
        double absoluteBearing = e.getBearingRadians() + getHeadingRadians();
        double lateralVelocity = getVelocity() * Math.sin(getHeadingRadians() - absoluteBearing + Math.PI);
        if ((bulletPower = (enemyEnergy - (enemyEnergy = e.getEnergy()))) <= 3 && bulletPower >= 0.09) {
            waves.add(new Wave(bulletPower, getHeadingRadians() + e.getBearingRadians() + Math.PI, e.getDistance(), (int)Math.round(lateralVelocity / 8)));
        }
        double max = 0;
        double dir = 0;
        for (int i = 0; i < waves.size(); i++) {
            Wave w = waves.get(i);
            w.move += getVelocity(); // accumulate our own (signed) velocity since this wave was fired
            w.time--;
            if (w.time <= 0) {
                waves.remove(w);
                i--;
            } else {
                // sum each wave's learned direction, weighted by 1 / (ticks until impact)
                dir -= w.dir / w.time;
                max += 1 / w.time;
            }
        }
        dir /= max;
        setAhead(Math.max(-1, Math.min(dir, 1)) * 36);
        setTurnRightRadians(normalRelativeAngle(wallSmoothing(getX(), getY(), getHeadingRadians() + e.getBearingRadians() + Math.PI / 2, (int)Math.signum(dir)) - getHeadingRadians())); //Staying perpendicular to the enemy to have a large escape angle
        setTurnRadarLeftRadians(getRadarTurnRemaining());
        System.out.println(Arrays.deepToString(m));
    }

    public void onHitByBullet(robocode.HitByBulletEvent e) {
        if (!waves.isEmpty()) {
            Wave w = waves.get(0);
            // record the sign of our net movement since the oldest wave was fired
            m[w.segment][w.segment2] += Math.signum(w.move);
        }
    }

    public void onHitWall(robocode.HitWallEvent e) {
        // penalize the movement choice that drove us into a wall, the same as being hit
        onHitByBullet(null);
    }

    public double wallSmoothing(double x, double y, double currentAngle, int dir) {
        if (dir != 0) {
            currentAngle += Math.PI / 20 * dir;//This is optional. I use it to get away from the enemy
            Rectangle2D battleField = new Rectangle(25, 25, 775, 575);
            while (!battleField.contains(x + Math.sin(currentAngle) * 160 * dir, y + Math.cos(currentAngle) * 160 * dir)) {
                currentAngle -= Math.PI / 45 * dir;
            }
        }
        return currentAngle;
    }

    public class Wave {

        double speed;
        double mea;
        double time;
        int segment;
        int segment2;
        double dir;
        double move = 0;

        public Wave(double power, double angle, double distance, int segment) {
            time = distance / (speed = (20 - 3 * power));
            mea = 8 / speed;
            this.segment = segment + 1; // shift the lateral velocity segment from -1..1 to 0..2
            // look up the learned direction; clamp the time segment so extreme
            // distances can't index past the 5-wide buffer
            dir = (int)Math.signum(m[segment + 1][segment2 = Math.min(4, (int)Math.round(time / 91 * 5))]);
            if(dir == 0) {
                dir = Math.random() < 0.5 ? -1: 1; // nothing learned yet: pick a random direction
            }
        }
    }
Dsekercioglu (talk) 13:34, 12 June 2017

I'm not sure I follow your logic with this. Could you explain how the pieces are meant to work?

Also, there might be a bug with w.move += getVelocity();, since that is the velocity of the robot not the wave.

Skilgannon (talk) 23:32, 19 July 2017

To predict where to move: the robot sums the predictions of the waves, weighted by the time since they were fired. Then, if the sum is positive it moves ahead, and if it is negative it moves back.

To learn: every wave accumulates the total velocity of the robot while it is alive (not deleted). If the wave hits, the robot takes the sign of that total velocity and adds it to the wave's slices.

When a wave is fired, it reads the value in its slices and sets its prediction to it.

Dsekercioglu (talk) 09:05, 21 July 2017
 

Sorry: if the weighted sum is positive, it moves back.

Dsekercioglu (talk) 09:50, 21 July 2017